Identifying Alzheimer's Disease-Related Brain Regions from Multi-Modality Neuroimaging Data using Sparse Composite Linear Discrimination Analysis
Shuai Huang 1, Jing Li 1, Jieping Ye 2,3, Kewei Chen 4, Teresa Wu 1, Adam Fleisher 4, Eric Reiman 4
1 Industrial Engineering, 2 Computer Science and Engineering, and 3 Center for Evolutionary Medicine and Informatics, The Biodesign Institute, Arizona State University, Tempe, USA
{shuang31, jing.li.8, jieping.ye, teresa.wu}@asu.edu
4 Banner Alzheimer's Institute and Banner PET Center, Banner Good Samaritan Medical Center, Phoenix, USA
{kewei.chen, adam.fleisher, eric.reiman}@bannerhealth.com
Abstract
Diagnosis of Alzheimer's disease (AD) at the early stage of disease development is of great clinical importance. Current clinical assessment, which relies primarily on cognitive measures, has low sensitivity and specificity. Fast-growing neuroimaging techniques hold great promise. Research so far has focused on a single neuroimaging modality. However, as different modalities provide complementary measures of the same disease pathology, fusion of multi-modality data may increase the statistical power in identifying disease-related brain regions. This is especially true for early AD, at which stage the disease-related regions are most likely to be weak-effect regions that are difficult to detect from a single modality alone. We propose a sparse composite linear discriminant analysis model (SCLDA) for identification of disease-related brain regions of early AD from multi-modality data. SCLDA uses a novel formulation that decomposes each LDA parameter into a product of a common parameter shared by all the modalities and a parameter specific to each modality, which enables joint analysis of all the modalities and borrowing strength from one another. We prove that this formulation is equivalent to a penalized likelihood with a non-convex regularization, which can be solved by DC (difference of convex functions) programming. We show that in using DC programming, the property of the non-convex regularization in terms of preserving weak-effect features can be nicely revealed. We perform extensive simulations to show that SCLDA outperforms existing competing algorithms on feature selection, especially in its ability to identify weak-effect features. We apply SCLDA to the Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images of 49 AD patients and 67 normal controls (NC). Our study identifies disease-related brain regions consistent with findings in the AD literature.
1 Introduction
Alzheimer's disease (AD) is a fatal, neurodegenerative disorder that currently affects over five million people in the U.S. It leads to substantial, progressive neuron damage that is irreversible, which eventually causes death. Early diagnosis of AD is of great clinical importance, because disease-modifying therapies given to patients at the early stage of their disease development will have a much better effect in slowing down the disease progression and helping preserve some cognitive functions of the brain. However, current clinical assessment, which relies mainly on cognitive measures, has low sensitivity and specificity in early diagnosis of AD. This is because these cognitive measures are vulnerable to confounding effects from non-AD-related factors such as patients' mood and the presence of other illnesses or major life events [1]. The confounding effect is especially severe in the diagnosis of early AD, at which time cognitive
impairment is not yet apparent. On the other hand, fast-growing neuroimaging techniques, such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET), provide great opportunities for improving early diagnosis of AD, due to their ability to overcome the limitations of conventional cognitive measures. There are two major categories of neuroimaging techniques, i.e., functional and structural neuroimaging. MRI is a typical structural neuroimaging technique, which allows for visualization of brain anatomy. PET is a typical functional neuroimaging technique, which measures the cerebral metabolic rate for glucose. Both techniques have been extensively applied to AD studies. For example, studies based on MRI have consistently revealed brain atrophy that involves the hippocampus and entorhinal cortex [2-6]; studies based on PET have revealed functional abnormality that involves the posterior temporal and parietal association cortices [8-10], posterior cingulate, precuneus, and medial temporal cortices [11-14].
There is overlap between the disease-related brain regions detected by MRI and those detected by PET, such as regions in the hippocampus area and the mesial temporal lobe [15-17]. This is not surprising since MRI and PET are two complementary measures of the same disease pathology, i.e., it starts mainly in the hippocampus and entorhinal cortex, and subsequently spreads throughout the temporal and orbitofrontal cortices, posterior cingulate, and association cortex [7]. However, most existing studies only exploited structural and functional alterations separately, which ignores the potential interaction between them. The fusion of the MRI and PET imaging modalities will increase the statistical power in identification of disease-related brain regions, especially for early AD, at which stage the disease-related regions are most likely to be weak-effect regions that are difficult to detect from MRI or PET alone. Once a good set of disease-related brain regions is identified, they can be further used to build an effective classifier (i.e., a biomarker from the clinical perspective) to enable AD diagnosis with high sensitivity and specificity.
The idea of multi-modality data fusion in the research of neurodegenerative disorders has been exploited before. For example, a number of models have been proposed to combine electroencephalography (EEG) and functional MRI (fMRI), including parallel EEG-fMRI independent component analysis [18]-[19], EEG-informed fMRI analysis [18] [20], and variational Bayesian methods [18] [21]. The purpose of these studies is different from ours, i.e., they aim to combine EEG, which has high temporal resolution but low spatial resolution, and fMRI, which has low temporal resolution but high spatial resolution, so as to obtain an accurate picture of the whole brain with both high spatial and high temporal resolution [18]-[21]. Also, there have been some studies that include both MRI and PET data for classification [15], [22]-[25]. However, these studies do not make use of the fact that MRI and PET measure the same underlying disease pathology from two complementary perspectives (i.e., the structural and functional perspectives), so that the analysis of one imaging modality can borrow strength from the other.
In this paper, we focus on the problem of identifying disease-related brain regions from multi-modality data. This is essentially a variable selection problem. Because MRI and PET data are high-dimensional, regularization techniques are needed for effective variable selection, such as the L1-regularization technique [25]-[30] and the L2/L1-regularization technique [31]. In particular, L2/L1-regularization has been used for variable selection jointly on multiple related datasets, also known as multitask feature selection [31], which has a similar nature to our problem. Note that both L1- and L2/L1-regularizations are convex regularizations, which has gained them popularity in the literature. On the other hand, there is increasing evidence that these convex regularizations tend to produce too severely shrunken parameter estimates. Therefore, these convex regularizations could lead to mis-identification of the weak-effect disease-related brain regions, which unfortunately make up a large portion of the disease-related brain regions, especially in early AD. Also, convex regularizations tend to select many irrelevant variables to compensate for the overly severe shrinkage in the parameters of the relevant variables. Considering these limitations of convex regularizations, we study non-convex regularizations [33]-[35] [39], which have the advantage of producing mildly or slightly shrunken parameter estimates, so as to preserve weak-effect disease-related brain regions, and the advantage of avoiding selecting many disease-irrelevant regions.
Specifically in this paper, we propose a sparse composite linear discriminant analysis model,
called SCLDA, for identification of disease-related brain regions from multi-modality data. The
contributions of our paper include:
- Formulation: We propose a novel formulation that decomposes each LDA parameter into a
product of a common parameter shared by all the data sources and a parameter specific to
each data source, which enables joint analysis of all the data sources and borrowing strength
from one another. We further prove that this formulation is equivalent to a penalized
likelihood with non-convex regularization.
- Algorithm: We show that the proposed non-convex optimization can be solved by the DC
(difference of convex functions) programming [39]. More importantly, we show that in using
the DC programming, the property of the non-convex regularization in terms of preserving
weak-effect features can be nicely revealed.
- Application: We apply the proposed SCLDA to the PET and MRI data of early AD patients
and normal controls (NC). Our study identifies disease-related brain regions that are
consistent with the findings in the AD literature. AD vs. NC classification based on these
identified regions achieves high accuracy, which makes the proposed method a useful tool for
clinical diagnosis of early AD. In contrast, the convex-regularization based multitask feature
selection method [31] identifies more irrelevant brain regions and yields a lower classification
accuracy.
2 Review of LDA and its variants
Denote $x = (x_1, x_2, \ldots, x_p)^T$ as the variables and assume there are $c$ classes. Denote $n_j$ as the sample size of class $j$, and $n = \sum_{j=1}^{c} n_j$ is the total sample size. Let $X = (x_1, x_2, \ldots, x_n)^T$ be the $n \times p$ sample matrix, where $x_i$ is the $i$th sample and $g(i)$ is its associated class index. Let $\mu = \frac{1}{n}\sum_{i=1}^{n} x_i$ be the overall sample mean, $\mu_j = \frac{1}{n_j}\sum_{i=1,\, g(i)=j}^{n} x_i$ be the sample mean of class $j$, $T = \frac{1}{n}\sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T$ be the total normalized sum of squares and products (SSQP), $W_j = \frac{1}{n_j}\sum_{i=1,\, g(i)=j}^{n} (x_i - \mu_j)(x_i - \mu_j)^T$ be the normalized class SSQP of class $j$, and $W = \frac{1}{n}\sum_{j=1}^{c} n_j W_j$ be the overall normalized class SSQP.
The objective of LDA is to seek a $p \times q$ linear transformation matrix, $A_q$, with which $A_q^T x$ retains the maximum amount of class discrimination information in $x$. To achieve this objective, one approach is to seek the $A_q$ that maximizes the between-class variance of $A_q^T x$, which can be measured by $\mathrm{tr}(A_q^T T A_q)$, while minimizing the within-class variance of $A_q^T x$, which can be measured by $\mathrm{tr}(A_q^T W A_q)$. Here $\mathrm{tr}(\cdot)$ is the matrix trace operator. This is equivalent to solving the following optimization problem:
$$\hat{A}_q = \operatorname*{argmax}_{A_q} \frac{\mathrm{tr}(A_q^T T A_q)}{\mathrm{tr}(A_q^T W A_q)}. \qquad (1)$$
Note that $\hat{A}_q$ corresponds to the right eigenvectors of $W^{-1} T$ and $q = c - 1$.
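As a concrete illustration of Eqn. (1) and the eigen-decomposition remark above, the following is a minimal sketch of classical LDA; the function name and arrays are illustrative assumptions, not code from the paper.

```python
import numpy as np

def lda_transform(X, y, q):
    # X: n x p data matrix, y: length-n integer labels, q: target dimension.
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    T = Xc.T @ Xc / n                       # total normalized SSQP
    W = np.zeros((p, p))
    for j in np.unique(y):
        Xj = X[y == j] - X[y == j].mean(axis=0)
        W += Xj.T @ Xj / n                  # accumulates the overall class SSQP
    # The discriminant directions are the leading right eigenvectors of W^{-1} T.
    evals, evecs = np.linalg.eig(np.linalg.solve(W, T))
    order = np.argsort(-evals.real)
    return evecs[:, order[:q]].real         # the p x q transformation A_q
```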
Another approach to finding the $A_q$ is to use maximum likelihood estimation for Gaussian populations that have different means and a common covariance matrix. Specifically, as in [36], this approach is developed by assuming the class distributions are Gaussian with a common covariance matrix, and that their mean differences lie in a $q$-dimensional subspace of the $p$-dimensional original variable space. Hastie [37] further generalized this approach by assuming that class distributions are a mixture of Gaussians, which has more flexibility than LDA. However, both approaches assume a common covariance matrix for all the classes, which is too strict in many practical applications, especially in high-dimensional problems where the covariance matrices of different classes tend to be different. Consequently, the linear transformation explored by LDA may not be effective.
In [38], a heterogeneous LDA (HLDA) is developed to relax this assumption. The HLDA seeks a $p \times p$ linear transformation matrix, $A$, in which only the first $q$ columns ($A_q$) contain discrimination information and the remaining $p - q$ columns ($A_{p-q}$) contain no discrimination information. For Gaussian models, assuming lack of discrimination information is equivalent to assuming that the means and the covariance matrices of the class distributions are the same for all classes in the $(p - q)$-dimensional subspace. Following this, the log-likelihood function of $A$ can be written as below [38]:
$$L(A \mid X) = -\frac{n}{2} \log \left| A_{p-q}^T T A_{p-q} \right| - \sum_{j=1}^{c} \frac{n_j}{2} \log \left| A_q^T W_j A_q \right| + n \log |A|. \qquad (2)$$
Here $|\cdot|$ denotes the determinant of a matrix. There is no closed-form solution for $A$. As a result, numerical methods are needed to derive the maximum likelihood estimate of $A$. It is worth mentioning that the LDA in the form of (1) is a special case of the HLDA [38].
3 The proposed SCLDA
Suppose that there are multiple data sources, $X^{(1)}, X^{(2)}, \ldots, X^{(M)}$, with each data source capturing one aspect of the same set of physical variables; e.g., MRI and PET capture the structural and functional aspects of the same brain regions. For each data source $X^{(m)}$, there is a linear transformation matrix $A^{(m)}$ which retains the maximum amount of class discrimination information in $X^{(m)}$. A naive way of estimating $\mathbf{A} = \{A^{(1)}, A^{(2)}, \ldots, A^{(M)}\}$ is to separately estimate each $A^{(m)}$ based on $X^{(m)}$. Apparently, this approach does not take advantage of the fact that all the data sources measure the same physical process. Also, when the sample size of each data source is small, this approach may lead to unreliable estimates for the $A^{(m)}$'s.
To tackle these problems, we propose a composite parameterization following the same line as [40]. Specifically, let $a_{k,l}^{(m)}$ be the element at the $k$th row and $l$th column of $A^{(m)}$. We treat $\{a_{k,l}^{(1)}, a_{k,l}^{(2)}, \ldots, a_{k,l}^{(M)}\}$ as an interrelated group and parameterize each $a_{k,l}^{(m)}$ as $a_{k,l}^{(m)} = \theta_k \gamma_{k,l}^{(m)}$, for $1 \le k \le p$, $1 \le l \le p$ and $1 \le m \le M$. In order to assure identifiability, we restrict each $\theta_k \ge 0$. Here, $\theta_k$ represents the common information shared by all the data sources about variable $k$, while $\gamma_{k,l}^{(m)}$ represents the specific information captured only by the $m$th data source. For example, for disease-related brain region identification, if $\theta_k = 0$, it means that all the data sources indicate variable $k$ is not a disease-related brain region; otherwise, variable $k$ is a disease-related brain region, and $\gamma_{k,l}^{(m)} \ne 0$ means that the $m$th data source supports this assertion.
The log-likelihood function of $\mathbf{A}$ is:
$$L_1(\mathbf{A} \mid X^{(1)}, X^{(2)}, \ldots, X^{(M)}) = \sum_{m=1}^{M} \left( -\frac{n}{2} \log \left| A_{p-q}^{(m)T} T^{(m)} A_{p-q}^{(m)} \right| - \sum_{j=1}^{c} \frac{n_j}{2} \log \left| A_q^{(m)T} W_j^{(m)} A_q^{(m)} \right| + n \log \left| A^{(m)} \right| \right),$$
which follows the same line as (2). However, our formulation includes the following constraints on $\mathbf{A}$:
$$a_{k,l}^{(m)} = \theta_k \gamma_{k,l}^{(m)}, \quad \theta_k \ge 0, \quad 1 \le k, l \le p, \quad 1 \le m \le M. \qquad (3)$$
Let $\Gamma = \{\gamma_{k,l}^{(m)}, 1 \le k \le p, 1 \le l \le p, 1 \le m \le M\}$ and $\Theta = \{\theta_k, 1 \le k \le p\}$. An intuitive choice for estimating $\Gamma$ and $\Theta$ is to maximize $L_1(\mathbf{A} \mid X^{(1)}, X^{(2)}, \ldots, X^{(M)})$ subject to the constraints in (3). However, it can be anticipated that no element in the estimated $\Gamma$ and $\Theta$ will be exactly zero, resulting in a model which is not interpretable, i.e., poor identification of disease-related regions. Thus, we encourage the estimates of $\Theta$ and of the first $q$ columns of $\Gamma$ (i.e., the columns containing discrimination information) to be sparse, by imposing the L1-penalty on $\Theta$ and $\Gamma$. By doing so, we obtain the following optimization problem for the proposed SCLDA:
$$\hat{\mathbf{A}} = \operatorname*{argmin}_{\mathbf{A}} \; -L_1(\mathbf{A} \mid X^{(1)}, X^{(2)}, \ldots, X^{(M)}) + \lambda_1 \sum_{k} \theta_k + \lambda_2 \sum_{k,l,m} \left| \gamma_{k,l}^{(m)} \right|,$$
$$\text{subject to } a_{k,l}^{(m)} = \theta_k \gamma_{k,l}^{(m)}, \; \theta_k \ge 0, \; 1 \le k, l \le p, \; 1 \le m \le M. \qquad (4)$$
Here, $\lambda_1$ and $\lambda_2$ control the degrees of sparsity of $\Theta$ and $\Gamma$, respectively. Tuning two regularization parameters is difficult. Fortunately, we prove the following theorem, which indicates that formulation (4) is equivalent to a simpler optimization problem involving only one regularization parameter.
Theorem 1: The optimization problem (4) is equivalent to the following optimization problem:
$$\hat{\mathbf{A}} = \operatorname*{argmin}_{\mathbf{A}} \; -L_1(\mathbf{A} \mid X^{(1)}, X^{(2)}, \ldots, X^{(M)}) + \lambda \sum_{k=1}^{p} \sqrt{\sum_{l=1}^{q} \sum_{m=1}^{M} \left| a_{k,l}^{(m)} \right|}, \qquad (5)$$
with $\lambda = 2\sqrt{\lambda_1 \lambda_2}$, i.e., the two problems yield the same estimates $\hat{a}_{k,l}^{(m)} = \hat{\theta}_k \hat{\gamma}_{k,l}^{(m)}$.
The proof can be found in the supplementary document. It can also be found in the supplementary material how this formulation serves the purpose of the composite parameterization, i.e., the common information and the specific information can be estimated separately and simultaneously. The optimization problem (5) is a non-convex optimization problem that is difficult to solve. We address this problem by using an iterative two-stage procedure known as DC (difference of convex functions) programming [39]. A full description of the algorithm can be found in the supplementary material.
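To give a flavour of how DC programming handles such non-convex penalties without reproducing the full HLDA likelihood, the sketch below applies the same idea to a simplified least-squares surrogate with a $\lambda \sum_k \sqrt{|b_k|}$ penalty: each outer step linearizes the concave square root, leaving a reweighted $\ell_1$ problem solved here by plain ISTA. The surrogate objective and all names are illustrative assumptions, not the SCLDA algorithm itself; note how coefficients that survive a few iterations receive ever smaller weights and are therefore shrunken less, which is the weak-effect-preserving property discussed above.

```python
import numpy as np

def dc_sqrt_penalty(X, y, lam, outer=10, inner=200, eps=1e-6):
    n, p = X.shape
    b = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(outer):
        # DC step: linearize sqrt(|b_k|) at the current estimate.
        w = lam / (2.0 * np.sqrt(np.abs(b) + eps))
        for _ in range(inner):               # ISTA on the convex majorizer
            g = X.T @ (X @ b - y)            # gradient of 0.5 * ||Xb - y||^2
            z = b - g / L
            b = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)
    return b
```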
4 Simulation studies
In this section, we conduct experiments to compare the performance of the proposed SCLDA with sparse LDA (SLDA) [42] and multitask feature selection [31]. Specifically, as we focus on LDA, we use the multitask feature selection method developed in [31] on LDA, denoted as MSLDA. Both SLDA and MSLDA adopt convex regularizations. Specifically, SLDA selects features from one single data source with L1-regularization; MSLDA selects features from multiple data sources with L2/L1-regularization.
We evaluate the performance of these three methods across various parameter settings, including the number of variables, $p$, the number of features, $q$, the number of data sources, $M$, the sample size, $n$, and the degree of overlap of the features across different data sources, $s\%$ (the larger the $s\%$, the more features are shared among the datasets). The definition of $s\%$ can be found in the simulation procedure included in the supplementary material. For each specification of the parameter settings, $M$ datasets can be generated following the simulation procedure. We apply the proposed SCLDA to the $M$ datasets and identify one feature vector $\hat{a}^{(m)}$ for each dataset, with $\lambda_1$ and $\lambda_2$ chosen by the method described in Section 3.3. The result can be described by the number of true positives (TPs) as well as the number of false positives (FPs). Here, true positives are the non-zero elements in the learned feature vector $\hat{a}^{(m)}$ that are also non-zero in the true feature vector $a^{(m)}$; false positives are the non-zero elements in $\hat{a}^{(m)}$ that are actually zero in $a^{(m)}$. As there are $M$ pairs of TPs and FPs for the $M$ datasets, the average TP over the $M$ datasets and the average FP over the $M$ datasets are used as the performance measures. This procedure (i.e., from data simulation, to SCLDA, to TP and FP generation) can be repeated $N$ times, and $N$ pairs of average TP and average FP are collected for SCLDA. In a similar way, we can obtain $N$ pairs of average TP and average FP for both SLDA and MSLDA.
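The TP/FP counts described above reduce to a support comparison; a small helper matching that definition is sketched below (array names are assumed).

```python
import numpy as np

def tp_fp(a_hat, a_true, tol=1e-8):
    est = np.abs(a_hat) > tol               # estimated support
    true = np.abs(a_true) > tol             # true support
    tp = int(np.sum(est & true))            # non-zeros recovered correctly
    fp = int(np.sum(est & ~true))           # non-zeros that should be zero
    return tp, fp
```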
Figures 1(a) and (b) compare SCLDA, SLDA and MSLDA by scattering the average TP against the average FP for each method. Each point corresponds to one of the $N$ repetitions. The comparison is across various parameter settings, including the number of variables ($p = 100, 200, 500$), the number of data sources ($M = 2, 5, 10$), and the degree of overlap of the features across different data sources ($s\% = 90\%, 70\%$). Additionally, $n/p$ is kept constant, $n/p = 1$. A general observation is that SCLDA is better than SLDA and MSLDA across all the parameter settings. Some specific trends can be summarized as follows: (i) Both SCLDA and MSLDA outperform SLDA in terms of TPs; SCLDA further outperforms MSLDA in terms of FPs. (ii) In Figure 1(a), rows correspond to different numbers of data sources, i.e., $M = 2, 5, 10$, respectively. It is clear that the advantage of SCLDA over both SLDA and MSLDA is more significant when there are more data sources. Also, MSLDA performs consistently better than SLDA. Similar phenomena are shown in Figure 1(b). This demonstrates that in analyzing each data source, both SCLDA and MSLDA are able to make use of the information contained in the other data sources. SCLDA can use this information more efficiently, as SCLDA produces less shrunken parameter estimates than MSLDA and is thus able to preserve weak-effect features. (iii) Comparing Figures 1(a) and (b), it can be seen that the advantage of SCLDA or MSLDA over SLDA is more significant as the data sources have a greater degree of overlap in their features. Finally, although not presented here, our simulation shows that the three methods perform similarly when $s\% = 40$ or less.
Figure 1: Average numbers of TPs vs. FPs for SCLDA (green '+'), SLDA (blue '*') and MSLDA (red 'o'): (a) $s\% = 90\%$, $n/p = 1$; (b) $s\% = 70\%$, $n/p = 1$.
5 Case study
5.1 Data preprocessing
Our study includes 49 AD patients and 67 age-matched normal controls (NC), with each subject being scanned by both PET and MRI. The PET and MRI images were downloaded from the database of the Alzheimer's Disease Neuroimaging Initiative. In what follows, we outline the data preprocessing steps.
Each image is spatially normalized to the Montreal Neurological Institute (MNI) template, using the affine transformation and subsequent non-linear warping algorithm [43] implemented in the SPM MATLAB toolbox. This is to ensure that each voxel is located in the same anatomical region for all subjects, so that spatial locations can be reported and interpreted in a consistent manner. Once all the images are in the MNI template space, we further apply the Automated Anatomical Labeling (AAL) technique [44] to segment the whole brain of each subject into 116 brain regions. The 90 regions that belong to the cerebral cortex are selected for the later analysis, as the other regions, which are not part of the cerebral cortex, are rarely considered related to AD in the literature. The measurement of each region in the PET data is the regional cerebral blood flow (rCBF); the measurement of each region in the MRI data is the structural volume of the region.
5.2 Disease-related brain regions
SCLDA is applied to the preprocessed PET and MRI data of AD and NC, with the penalty parameter selected by the AIC method mentioned in Section 3. 26 disease-related brain regions are identified from PET and 21 from MRI (see Table 1 for their names). The maps of the disease-related brain regions identified from MRI and PET are highlighted in Figure 2(a) and (b), respectively, with different colors given to neighboring regions in order to distinguish them. Each figure is a set of horizontal cut-away slices of the brain as seen from the top, which aims to provide a full view of the locations of the regions.
One major observation is that the identified disease-related brain regions from MRI are in the hippocampus, parahippocampus, temporal lobe, frontal lobe, and precuneus, which is consistent with the existing literature that reports structural atrophy in these brain areas [3-6, 12-14]. The identified disease-related brain regions from PET are in the temporal, frontal and parietal lobes, which is consistent with many functional neuroimaging studies that report reduced rCBF or reduced cortical glucose metabolism in these areas [8-10, 12-14]. Many of these identified disease-related regions can be explained in terms of the AD pathology. For example, the hippocampus is a region affected by AD the earliest and most severely [6]. Also, as regions in the temporal lobe are essential for memory, damage to these regions by AD can explain the memory loss which is a major clinical symptom of AD. The consistency of our findings with the AD literature supports the effectiveness of the proposed SCLDA.
Another finding is that there is a large overlap between the identified disease-related regions from PET and those from MRI, which implies a strong interaction between the functional and structural alterations in these regions. Although well-accepted biological mechanisms underlying this interaction are still not very clear, several explanations exist in the literature. The first explanation is that both functional and structural alterations could be the consequence of the loss of dendritic arborizations, which results from intracellular accumulation of PHF-tau and further leads to neuron death and grey matter loss [14]. The second explanation is that the AD pathology may include a vascular component, which may result in reduced rCBF due to limited blood supply and may ultimately result in structural alterations such as brain atrophy [45].
Figure 2: Locations of disease-related brain regions identified from (a) MRI; (b) PET.
5.3 Classification accuracy
As one of our primary goals is to distinguish AD from NC, the disease-related brain regions identified by SCLDA are further utilized to establish a classification model. Specifically, for each subject, the rCBF values of the 26 disease-related brain regions identified from PET and the structural volumes of the 21 disease-related brain regions identified from MRI are used, as a joint spatial pattern of both brain physiology and structure. As a result, each subject is associated with a vector of 47 features/variables. A linear SVM (Support Vector Machine) is employed as the classifier. The classification accuracy based on 10-fold cross-validation is 94.3%. For comparison purposes, MSLDA is also applied, which identifies 45 and 38 disease-related brain regions for PET and MRI, respectively. A linear SVM applied to the 45+38 features gives a classification accuracy of only 85.8%. Note that MSLDA identifies a much larger number of disease-related brain regions than SCLDA, but some of the regions identified by MSLDA may indeed be disease-irrelevant, and including them deteriorates the classification.
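A hedged sketch of the classification protocol just described is given below: a linear SVM with 10-fold cross-validation on the 47 pooled PET/MRI regional features. The random feature matrix is only a placeholder for the real rCBF and volume measurements.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

F = np.random.randn(116, 47)               # placeholder: 26 rCBF + 21 volume features
labels = np.array([1] * 49 + [0] * 67)     # 49 AD patients, 67 normal controls

acc = cross_val_score(LinearSVC(), F, labels, cv=10).mean()
print(f"10-fold CV accuracy: {acc:.3f}")
```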
5.4 Relationship between structural atrophy, abnormal rCBF, and severity of cognitive impairment in AD
In addition to classification, it is also of interest to further verify the relevance of the identified disease-related regions to AD in an alternative way. One approach is to investigate the degree to which those disease-related regions are relevant to cognitive impairment, which can be measured by the Alzheimer's Disease Assessment Scale - cognitive subscale (ADAS-cog). The ADAS measures the severity of the most important symptoms of AD, while its subscale, ADAS-cog, is the most popular cognitive testing instrument used in clinical trials. The ADAS-cog consists of 11 items measuring disturbances of memory, language, praxis, attention and other cognitive abilities that are often affected by AD. As the total score of these 11 items provides an overall assessment of cognitive impairment, we regress this ADAS-cog total score (the response) against the rCBF or structural volume measurement (the predictor) of each identified brain region, using a simple regression. The regression results are listed in Table 1.
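Each entry of Table 1 comes from one such simple regression; a sketch of the per-region computation is below (variable names are assumed).

```python
from scipy import stats

def region_regression(region_values, adas_cog):
    # Simple linear regression of the ADAS-cog total score on one regional measure.
    res = stats.linregress(region_values, adas_cog)
    return res.rvalue ** 2, res.pvalue      # R-square and its p-value
```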
It is not surprising to find that some regions in the hippocampus area and temporal lobes are among the best predictors, as these regions are extensively reported in the literature as the most severely affected by AD [3-6]. Also, it is found that most of these brain regions are weak-effect predictors, as most of them can only explain a small portion of the variability in the ADAS-cog total score, i.e., many R-square values in Table 1 are less than 10%. However, although the effects are weak, most of them are significant, i.e., most of the p-values in Table 1 are smaller than 0.05. Furthermore, it is worth noting that 70.22% of the variability in ADAS-cog can be explained by taking all the 26 brain regions identified from PET as predictors in a multiple regression model; 49.72% of the variability can be explained by taking all the 21 brain regions from MRI as predictors in a multiple regression model. All these findings imply that the disease-related brain regions are indeed weak-effect features if considered individually, but jointly they can play a strong role in characterizing AD. This verifies the suitability of the proposed SCLDA for AD studies, as SCLDA can preserve weak-effect features.
Table 1: Explanatory power of regional rCBF and structural volume for variability in ADAS-cog ('~' means this region is not identified from PET (or MRI) as a disease-related region by SCLDA).

| Brain region | PET R² | PET p-value | MRI R² | MRI p-value |
|---|---|---|---|---|
| Precentral_L | 0.003 | 0.503 | 0.027 | 0.077 |
| Precentral_R | 0.044 | 0.022 | ~ | ~ |
| Frontal_Sup_L | 0.051 | 0.013 | 0.047 | 0.018 |
| Frontal_Sup_R | 0.044 | 0.023 | ~ | ~ |
| Frontal_Mid_R | 0.056 | 0.010 | 0.072 | 0.003 |
| Frontal_M_O_L | 0.036 | 0.040 | 0.086 | 0.001 |
| Frontal_M_O_R | 0.019 | 0.138 | 0.126 | 0.000 |
| Insula_L | 0.016 | 0.171 | 0.163 | <10^-4 |
| Insula_R | ~ | ~ | 0.125 | 0.000 |
| Cingulum_A_R | 0.004 | 0.497 | 0.082 | 0.001 |
| Cingulum_Mid_L | 0.001 | 0.733 | 0.040 | 0.030 |
| Cingulum_Post_L | 0.184 | <10^-4 | ~ | ~ |
| Hippocampus_L | 0.158 | <10^-4 | ~ | ~ |
| Hippocampus_R | ~ | ~ | 0.242 | <10^-4 |
| ParaHippocamp_L | 0.206 | <10^-4 | ~ | ~ |
| Amygdala_L | 0.090 | 0.001 | 0.313 | <10^-4 |
| Calcarine_L | 0.038 | 0.034 | 0.028 | 0.070 |
| Lingual_L | 0.066 | 0.005 | 0.044 | 0.023 |
| Postcentral_L | 0.038 | 0.035 | 0.026 | 0.081 |
| Parietal_Sup_R | 0.001 | 0.677 | ~ | ~ |
| Angular_R | 0.173 | <10^-4 | 0.063 | 0.006 |
| Precuneus_R | 0.063 | 0.006 | 0.025 | 0.084 |
| Paracentr_Lobu_L | 0.035 | 0.043 | 0.000 | 0.769 |
| Pallidum_L | 0.082 | 0.001 | ~ | ~ |
| Pallidum_R | ~ | ~ | 0.020 | 0.122 |
| Heschl_L | 0.001 | 0.640 | ~ | ~ |
| Heschl_R | 0.000 | 0.744 | 0.111 | 0.000 |
| Temporal_P_S_R | 0.008 | 0.336 | 0.071 | 0.003 |
| Temporal_Inf_R | 0.187 | <10^-4 | 0.147 | <10^-4 |
| All regions | 0.702 | <10^-4 | 0.497 | <10^-4 |
6 Conclusion
In this paper, we proposed the SCLDA model for identification of disease-related brain regions of AD from multi-modality data, which is capable of preserving weak-effect disease-related brain regions because it imposes less shrinkage on its parameters. We applied SCLDA to the PET and MRI data of early AD patients and normal controls. As MRI and PET measure two complementary aspects (structural and functional, respectively) of the same AD pathology, fusion of these two imaging modalities can make effective use of their interaction and thus improve the statistical power in identification of disease-related brain regions. Our findings were consistent with the literature and also revealed some new aspects that may suggest further investigation in future neuroimaging research.
References
[1] deToledo-Morrell, L., Stoub, T.R., Bulgakova, M. 2004. MRI-derived entorhinal volume is a good predictor of conversion from MCI to AD. Neurobiol. Aging 25, 1197-1203.
[2] Morra, J.H., Tu, Z. 2008. Validation of automated hippocampal segmentation method. NeuroImage 43, 59-68.
[3] Morra, J.H., Tu, Z. 2009a. Automated 3D mapping of hippocampal atrophy. Hum. Brain Map. 30, 2766-2788.
[4] Morra, J.H., Tu, Z. 2009b. Automated mapping of hippocampal atrophy in 1-year repeat MRI data. NeuroImage 45, 213-221.
[5] Schroeter, M.L., Stein, T. 2009. Neural correlates of AD and MCI. NeuroImage 47, 1196-1206.
[6] Braak, H., Braak, E. 1991. Neuropathological stageing of Alzheimer-related changes. Acta Neuro. 82, 239-259.
[7] Bradley, K.M., O'Sullivan. 2002. Cerebral perfusion SPET correlated with Braak pathological stage in AD. Brain 125, 1772-1781.
[8] Keilp, J.G., Alexander, G.E. 1996. Inferior parietal perfusion, lateralization, and neuropsychological dysfunction in AD. Brain Cogn. 32, 365-383.
[9] Schroeter, M.L., Stein, T. 2009. Neural correlates of AD and MCI. NeuroImage 47, 1196-1206.
[10] Asllani, I., Habeck, C. 2008. Multivariate and univariate analysis of continuous arterial spin labeling perfusion MRI in AD. J. Cereb. Blood Flow Metab. 28, 725-736.
[11] Du, A.T., Jahng, G.H. 2006. Hypoperfusion in frontotemporal dementia and AD. Neurology 67, 1215-1220.
[12] Ishii, K., Kitagaki, H. 1996. Decreased medial temporal oxygen metabolism in AD. J. Nucl. Med. 37, 1159-1165.
[13] Johnson, N.A., Jahng, G.H. 2005. Pattern of cerebral hypoperfusion in AD. Radiology 234, 851-859.
[14] Wolf, H., Jelic, V. 2003. A critical discussion of the role of neuroimaging in MCI. Acta Neurol. 107(4), 52-76.
[15] Tosun, D., Mojabi, P. 2010. Joint analysis of structural and perfusion MRI for cognitive assessment and classification of AD and normal aging. NeuroImage 52, 186-197.
[16] Alsop, D., Casement, M. 2008. Hippocampal hyperperfusion in Alzheimer's disease. NeuroImage 42, 1267-1274.
[17] Mosconi, L., Tsui, W.-H. 2005. Reduced hippocampal metabolism in MCI and AD. Neurology 64, 1860-1867.
[18] Mulert, C., Lemieux, L. 2010. EEG-fMRI: Physiological Basis, Technique and Applications. Springer.
[19] Xu, L., Qiu, C., Xu, P. and Yao, D. 2010. A parallel framework for simultaneous EEG/fMRI analysis: methodology and simulation. NeuroImage 52(3), 1123-1134.
[20] Philiastides, M. and Sajda, P. 2007. EEG-informed fMRI reveals spatiotemporal characteristics of perceptual decision making. Journal of Neuroscience 27(48), 13082-13091.
[21] Daunizeau, J., Grova, C. 2007. Symmetrical event-related EEG/fMRI information fusion. NeuroImage 36, 69-87.
[22] Jagust, W. 2006. PET and MRI in the diagnosis and prediction of dementia. Alzheimer's Dement. 2, 36-42.
[23] Kawachi, T., Ishii, K. and Sakamoto, S. 2006. Comparison of the diagnostic performance of FDG-PET and VBM. Eur. J. Nucl. Med. Mol. Imaging 33, 801-809.
[24] Matsunari, I., Samuraki, M. 2007. Comparison of 18F-FDG PET and optimized voxel-based morphometry for detection of AD. J. Nucl. Med. 48, 1961-1970.
[25] Schmidt, M., Fung, G. and Rosales, R. 2007. Fast optimization methods for L1-regularization: a comparative study and two new approaches. ECML 2007.
[26] Liu, J., Ji, S. and Ye, J. 2009. SLEP: Sparse Learning with Efficient Projections. Arizona State University.
[27] Tibshirani, R. 1996. Regression shrinkage and selection via the lasso. JRSS, Series B, 58(1), 267-288.
[28] Friedman, J., Hastie, T. and Tibshirani, R. 2007. Sparse inverse covariance estimation with the graphical lasso. Biostatistics 8(1), 1-10.
[29] Zou, H., Hastie, T. and Tibshirani, R. 2006. Sparse PCA. J. of Comp. and Graphical Statistics 15(2), 262-286.
[30] Qiao, Z., Zhou, L. and Huang, J. 2006. Sparse LDA with applications to high dimensional low sample size data. IAENG Applied Mathematics 39(1).
[31] Argyriou, A., Evgeniou, T. and Pontil, M. 2008. Convex multi-task feature learning. Machine Learning 73(3), 243-272.
[32] Huang, S., Li, J., et al. 2010. Learning brain connectivity of AD by sparse inverse covariance estimation. NeuroImage 50, 935-949.
[33] Candes, E., Wakin, M. and Boyd, S. 2008. Enhancing sparsity by reweighted L1 minimization. Journal of Fourier Analysis and Applications 14(5), 877-905.
[34] Mazumder, R., Friedman, J. 2009. SparseNet: coordinate descent with non-convex penalties. Manuscript.
[35] Zhang, T. 2008. Multi-stage convex relaxation for learning with sparse regularization. NIPS 2008.
[36] Campbell, N. 1984. Canonical variate analysis: a general formulation. Australian Journal of Statistics 26, 86-96.
[37] Hastie, T. and Tibshirani, R. 1994. Discriminant analysis by Gaussian mixtures. Technical report, AT&T Bell Laboratories.
[38] Kumar, N. and Andreou, G. 1998. Heteroscedastic discriminant analysis and reduced rank HMMs for improved speech recognition. Speech Communication 26(4), 283-297.
[39] Gasso, G., Rakotomamonjy, A. and Canu, S. 2009. Recovering sparse signals with non-convex penalties and DC programming. IEEE Trans. Signal Processing 57(12), 4686-4698.
[40] Guo, J., Levina, E., Michailidis, G. and Zhu, J. 2011. Joint estimation of multiple graphical models. Biometrika 98(1), 1-15.
[41] Bertsekas, D. 1982. Projected Newton methods for optimization problems with simple constraints. SIAM J. Control Optim. 20, 221-246.
[42] Clemmensen, L., Hastie, T., Witten, D. and Ersboll, B. 2011. Sparse discriminant analysis. Technometrics (in press).
[43] Friston, K.J., Ashburner, J. 1995. Spatial registration and normalization of images. HBM 2, 89-165.
[44] Tzourio-Mazoyer, N., et al. 2002. Automated anatomical labelling of activations in SPM. NeuroImage 15, 273-289.
[45] Bidzan, L. 2005. Vascular factors in dementia. Psychiatr. Pol. 39, 977-986.
Generalized Lasso based Approximation of Sparse Coding for Visual Recognition
Nobuyuki Morioka
The University of New South Wales & NICTA
Sydney, Australia
[email protected]
Shin'ichi Satoh
National Institute of Informatics
Tokyo, Japan
[email protected]
Abstract
Sparse coding, a method of explaining sensory data with as few dictionary bases as possible, has attracted much attention in computer vision. For visual object category recognition, $\ell_1$ regularized sparse coding is combined with the spatial pyramid representation to obtain state-of-the-art performance. However, because of its iterative optimization, applying sparse coding onto every local feature descriptor extracted from an image database can become a major bottleneck. To overcome this computational challenge, this paper presents Generalized Lasso based Approximation of Sparse coding (GLAS). By representing the distribution of sparse coefficients with the slice transform, we fit a piece-wise linear mapping function with the generalized lasso. We also propose an efficient post-refinement procedure to perform mutual inhibition between bases, which is essential for an overcomplete setting. The experiments show that GLAS obtains a performance comparable to $\ell_1$ regularized sparse coding, yet achieves a significant speed-up, demonstrating its effectiveness for large-scale visual recognition problems.
1 Introduction
Recently, sparse coding [3, 18] has attracted much attention in computer vision research. Its applications range from image denoising [23] to image segmentation [17] and image classification [10, 24], achieving state-of-the-art results. Sparse coding interprets an input signal $x \in \mathbb{R}^{D \times 1}$ with a sparse vector $u \in \mathbb{R}^{K \times 1}$ whose linear combination with an overcomplete set of $K$ bases (i.e., $D \ll K$), also known as a dictionary $B \in \mathbb{R}^{D \times K}$, reconstructs the input as precisely as possible. To enforce sparseness on $u$, the $\ell_1$ norm is a popular choice due to its computational convenience and its interesting connection with the NP-hard $\ell_0$ norm in compressed sensing [2]. Several efficient $\ell_1$ regularized sparse coding algorithms have been proposed [4, 14] and are adopted in visual recognition [10, 24]. In particular, Yang et al. [24] compute the sparse codes of many local feature descriptors with sparse coding. However, due to the $\ell_1$ norm being non-smooth convex, the sparse coding algorithm needs to optimize iteratively until convergence. Therefore, the local feature descriptor coding step becomes a major bottleneck for large-scale problems like visual recognition.
The goal of this paper is to achieve state-of-the-art performance on large-scale visual recognition that is comparable to the work of Yang et al. [24], but with a significant improvement in efficiency. To this end, we propose Generalized Lasso based Approximation of Sparse coding, GLAS for short. Specifically, we encode the distribution of each dimension in sparse codes with the slice transform representation [9] and learn a piece-wise linear mapping function with the generalized lasso [21] to obtain the best fit to approximate $\ell_1$ regularized sparse coding. We further propose an efficient post-refinement procedure to capture the dependency between overcomplete bases. The effectiveness of our approach is demonstrated with several challenging object and scene category datasets, showing a performance comparable to Yang et al. [24] and better than other fast algorithms that obtain sparse codes [22]. While there have been several supervised dictionary
learning methods for sparse coding to obtain more discriminative sparse representations [16, 25],
they have not been evaluated on visual recognition with many object categories due to their computational challenges. Furthermore, Ranzato et al. [19] have empirically shown that unsupervised
learning of visual features can obtain a more general and effective representation. Therefore, in this
paper, we focus on learning a fast approximation of sparse coding in an unsupervised manner.
The paper is organized as follows: Section 2 reviews some related work including the linear spatial
pyramid combined with sparse coding and other fast algorithms to obtain sparse codes. Section 3
presents GLAS. This is followed by the experimental results on several challenging categorization
datasets in Section 4. Section 5 concludes the paper with discussion and future work.
2 Related Work
2.1 Linear Spatial Pyramid Matching Using Sparse Coding
This section reviews the linear spatial pyramid matching based on sparse coding by Yang et al. [24]. Given a collection of $N$ local feature descriptors randomly sampled from training images, $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{D \times N}$, an overcomplete dictionary $B = [b_1, b_2, \ldots, b_K] \in \mathbb{R}^{D \times K}$ is learned by
$$\min_{B, U} \sum_{i=1}^{N} \|x_i - B u_i\|_2^2 + \lambda \|u_i\|_1 \quad \text{s.t.} \; \|b_k\|_2^2 \le 1, \; k = 1, 2, \ldots, K. \qquad (1)$$
The cost function above is a combination of the reconstruction error and the $\ell_1$ sparsity penalty, which is controlled by $\lambda$. The $\ell_2$ norm of each $b_k$ is constrained to be less than or equal to 1 to avoid a trivial solution. Since both $B$ and $[u_1, u_2, \ldots, u_N]$ are unknown a priori, an alternating optimization technique is often used [14] to optimize over the two parameter sets; a schematic loop is sketched below.
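In this sketch the codes $U$ are updated by a lasso solver with $B$ fixed, and $B$ is then refit by least squares and renormalized. This is an illustrative loop only, not the feature-sign algorithm of [14], and sklearn's Lasso scales the data term by $1/(2D)$, so its alpha is not identical to $\lambda$ in Eqn. (1).

```python
import numpy as np
from sklearn.linear_model import Lasso

def learn_dictionary(X, K, alpha, iters=20):
    D, N = X.shape
    B = np.random.randn(D, K)
    B /= np.linalg.norm(B, axis=0)
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=1000)
    for _ in range(iters):
        lasso.fit(B, X)                     # sparse codes for all N descriptors
        U = lasso.coef_.T                   # K x N
        B = X @ np.linalg.pinv(U)           # least-squares dictionary update
        B /= np.maximum(np.linalg.norm(B, axis=0), 1e-12)
    return B
```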
Under the spatial pyramid matching framework, each image is divided into a set of sub-regions $r = [r_1, r_2, \ldots, r_R]$. For example, if 1×1, 2×2 and 4×4 partitions are used on an image, we have 21 sub-regions. Then, we compute the sparse solutions of all local feature descriptors appearing in each sub-region $r_j$, denoted as $U_{r_j}$, by
$$\min_{U_{r_j}} \|X_{r_j} - B U_{r_j}\|_2^2 + \lambda \|U_{r_j}\|_1. \qquad (2)$$
The sparse solutions are max-pooled for each sub-region and concatenated with those of the other sub-regions to build a statistic of the image:
$$h = [\max(|U_{r_1}|)^\top, \max(|U_{r_2}|)^\top, \ldots, \max(|U_{r_R}|)^\top]^\top, \qquad (3)$$
where $\max(\cdot)$ is a function that finds the maximum value in each row of a matrix and returns a column vector. Finally, a linear SVM is trained on a set of image statistics for classification.
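A sketch of the pooled statistic of Eqn. (3) is given below, assuming descriptor coordinates scaled to $[0, 1)$; the grid levels 1, 2 and 4 give the 21 sub-regions mentioned above.

```python
import numpy as np

def pyramid_pooling(U, xy, levels=(1, 2, 4)):
    # U: K x n sparse codes; xy: n x 2 descriptor coordinates in [0, 1).
    K = U.shape[0]
    pooled = []
    for g in levels:
        cell = np.minimum((xy * g).astype(int), g - 1)   # grid cell per descriptor
        for row in range(g):
            for col in range(g):
                mask = (cell[:, 0] == row) & (cell[:, 1] == col)
                v = np.abs(U[:, mask]).max(axis=1) if mask.any() else np.zeros(K)
                pooled.append(v)            # max-pooled codes of one sub-region
    return np.concatenate(pooled)           # length K * (1 + 4 + 16)
```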
The main advantage of using sparse coding is that state-of-the-art results can be achieved with a simple linear classifier, as reported in [24]. Compared to kernel-based methods, this dramatically speeds up the training and testing time of the classifier. However, the step of finding a sparse code for each local descriptor with sparse coding now becomes a major bottleneck. Using the efficient sparse coding algorithm based on feature-sign search [14], the time to compute the solution for one local descriptor $u$ is $O(KZ)$, where $Z$ is the number of non-zeros in $u$. This paper proposes an approximation method whose time complexity reduces to $O(K)$. With the post-refinement procedure, its time complexity is $O(K + Z^2)$, which is still much lower than $O(KZ)$.
2.2 Predictive Sparse Decomposition
Predictive sparse decomposition (PSD), described in [10, 11], is a feedforward network that applies a non-linear mapping function to linearly transformed input data to match the optimal sparse coding solution as accurately as possible. Such a feedforward network is defined as $\hat{u}_i = G g(W x_i, \theta)$, where $g(z, \theta)$ denotes a non-linear parametric mapping function which can be of any form; to name a few, there are the hyperbolic tangent, $\tanh(z + \theta)$, and soft shrinkage, $\mathrm{sign}(z) \max(|z| - \theta, 0)$. The function is applied to the linearly transformed data $W x_i$ and subsequently scaled by a diagonal matrix $G$. Given training samples $\{x_i\}_{i=1}^{N}$, the parameters can be estimated either jointly with or separately from the dictionary $B$. When learning jointly, we minimize the cost function given below:
$$\min_{B, G, W, \theta, U} \sum_{i=1}^{N} \|x_i - B u_i\|_2^2 + \lambda \|u_i\|_1 + \alpha \|u_i - G g(W x_i, \theta)\|_2^2. \qquad (4)$$
When learning separately, $B$ and $U$ are obtained with Eqn. (1) first. Then, the other remaining parameters $G$, $W$ and $\theta$ are estimated by solving the last term of Eqn. (4) only. Gregor and LeCun [7] have later proposed a better, but iterative, approximation scheme for $\ell_1$ regularized sparse coding.
One downside of the parametric approach is that its accuracy largely depends on how well its parametric function fits the target statistical distribution, as argued by Hel-Or and Shaked [9]. This paper explores a non-parametric approach which can fit any distribution as long as the available data samples are representative. The advantage of our approach over the parametric approach is that we do not need to seek an appropriate parametric function for each distribution. This is particularly useful in visual recognition that uses multiple feature types, as it automatically estimates the function form for each feature type from data. We demonstrate this with two different local descriptor types in our experiments.
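Once trained, the PSD encoder is a single feedforward pass; a sketch with the soft-shrinkage non-linearity is below ($G$, $W$ and $\theta$ would come from optimizing Eqn. (4); names are illustrative).

```python
import numpy as np

def psd_encode(x, G, W, theta):
    z = W @ x
    return G @ (np.sign(z) * np.maximum(np.abs(z) - theta, 0.0))  # soft shrinkage
```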
2.3 Locality-constrained Linear Coding
Another notable work that overcomes the bottleneck of the local descriptor coding step is locality-constrained linear coding (LLC) proposed by Wang et al. [22], a fast version of local coordinate coding [26]. Given a local feature descriptor $x_i$, LLC searches for the $M$ nearest dictionary bases of $x_i$; these nearest bases, stacked in columns, are denoted as $B_{\sigma_i} \in \mathbb{R}^{D \times M}$, where $\sigma_i$ indicates the index list of the bases. Then, the coefficients $u_{\sigma_i} \in \mathbb{R}^{M \times 1}$ whose linear combination with $B_{\sigma_i}$ reconstructs $x_i$ are solved for by
$$\min_{u_{\sigma_i}} \|x_i - B_{\sigma_i} u_{\sigma_i}\|_2^2 \quad \text{s.t.} \; \mathbf{1}^\top u_{\sigma_i} = 1. \qquad (5)$$
This is a least squares problem which can be solved quite efficiently. The final sparse code $u_i$ is obtained by setting its elements indexed by $\sigma_i$ to $u_{\sigma_i}$. The time complexity of LLC is $O(K + M^2)$. This excludes the time required to find the $M$ nearest neighbours. While it is fast, the resulting sparse solutions obtained are not as discriminative as the ones obtained by sparse coding. This may be due to the fact that $M$ is fixed across all local feature descriptors. Some descriptors may need more bases for accurate representation and others may need fewer bases for more distinctiveness. In contrast, the number of bases selected with our post-refinement procedure to handle the mutual inhibition is different for each local descriptor.
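A sketch of the LLC code computation is given below; it uses the analytical solution of Eqn. (5), i.e., the minimizer of $u^\top C u$ under the sum-to-one constraint, where $C$ is the covariance of the nearest bases shifted by $x$. The small regularizer follows the common practice of adding a multiple of the trace, which is an assumption here rather than the paper's exact setting.

```python
import numpy as np

def llc_code(x, B, M, reg=1e-4):
    # B: D x K dictionary, x: D-dimensional descriptor.
    d2 = np.sum((B - x[:, None]) ** 2, axis=0)
    idx = np.argsort(d2)[:M]                # M nearest dictionary bases
    Z = B[:, idx].T - x                     # rows hold b_j - x
    C = Z @ Z.T
    C += reg * np.trace(C) * np.eye(M)      # regularize for numerical stability
    w = np.linalg.solve(C, np.ones(M))
    u = np.zeros(B.shape[1])
    u[idx] = w / w.sum()                    # enforce the sum-to-one constraint
    return u
```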
3 Generalized Lasso based Approximation of Sparse Coding
This section describes GLAS. We first learn a dictionary from a collection of local feature descriptors as given in Eqn. (1). Then, based on the slice transform representation, we fit a piece-wise linear mapping function with the generalized lasso to approximate the optimal sparse solutions of the local feature descriptors under $\ell_1$ regularized sparse coding. Finally, we propose an efficient post-refinement procedure to perform the mutual inhibition.
3.1 Slice Transform Representation
The slice transform representation was introduced by Hel-Or and Shaked [9] as a way to discretize a function space so as to fit a piece-wise linear function for the purpose of image denoising. It was later adopted by Adler et al. [1] for single-image super-resolution. In this paper, we utilise the representation to approximate sparse coding so as to obtain sparse codes for local feature descriptors as fast as possible.
Given a local descriptor $x$, we can linearly combine it with $B^\top$ to obtain $z = B^\top x$. For the moment, we just consider one dimension of $z$, denoted as $z$, which is a real value and lies in a half-open interval $[a, b)$. The interval is divided into $Q - 1$ equal-sized bins whose boundaries form a vector $q = [q_1, q_2, \ldots, q_Q]^\top$ such that $a = q_1 < q_2 < \cdots < q_Q = b$.
Figure 1: Different approaches to fit a piece-wise linear mapping function: regularized least squares (RLS, red; see Eqn. (8)), $\ell_1$-regularized sparse coding (L1-SC, magenta; see Eqn. (9)), and GLAS (green; see Eqn. (10)). (a) All three methods achieve a good fit. (b) A case where L1-SC fails to extrapolate well at the end and RLS tends to align itself to $q$ (black). (c) A case where data samples at around 0.25 are removed artificially, illustrating that RLS fails to interpolate as no neighbouring prior is used. In contrast, GLAS can both interpolate and extrapolate well in the case of missing or noisy data.
The interval into which the value of z falls is given by π(z) = j if z ∈ [q_{j−1}, q_j), and its corresponding residue is

    r(z) = \frac{z - q_{\pi(z)-1}}{q_{\pi(z)} - q_{\pi(z)-1}}.

Based on the above, we can re-express z as

    z = (1 - r(z))\, q_{\pi(z)-1} + r(z)\, q_{\pi(z)} = S_q(z)\, q,    (6)

where S_q(z) = [0, ..., 0, 1 − r(z), r(z), 0, ..., 0].
Returning to the multivariate case of z = B^T x, we have z = [S_q(z_1) q, S_q(z_2) q, ..., S_q(z_K) q]^T, where z_k denotes the k-th dimension of z. We then replace the boundary vector q with learnt vectors p = {p_1, p_2, ..., p_K}, chosen so that the resulting vector approximates the optimal sparse solution of x obtained by ℓ1-regularized sparse coding as closely as possible. This is written as

    \hat{u} = [S_q(z_1)\, p_1, S_q(z_2)\, p_2, ..., S_q(z_K)\, p_K]^\top.    (7)
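To make the representation concrete, here is a minimal sketch of evaluating such a piece-wise linear map for one dimension (the NumPy implementation and names are ours; with p = q it reproduces Eqn. (6) and returns z itself):

```python
import numpy as np

def slice_interpolate(z, q, p):
    """Piece-wise linear map defined by knot values p over bins q.

    z : scalar in [q[0], q[-1]); q : (Q,) increasing bin boundaries;
    p : (Q,) values at the boundaries (learnt, or p = q for identity).
    """
    j = np.searchsorted(q, z, side='right')   # pi(z): z in [q[j-1], q[j])
    r = (z - q[j - 1]) / (q[j] - q[j - 1])    # residue r(z)
    return (1.0 - r) * p[j - 1] + r * p[j]
```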
Hel-Or and Shaked [9] formulated the problem of learning each p_k as regularized least squares, either independently in a transform domain or jointly in a spatial domain. Unlike their setting, we have a significantly larger number of bases, which makes joint optimization of all p_k difficult. Moreover, since we are interested in approximating the sparse solutions, which live in the transform domain, we learn each p_k independently. Given N local descriptors X = [x_1, x_2, ..., x_N] ∈ R^{D×N} and their corresponding sparse solutions U = [u_1, u_2, ..., u_N] = [y_1, y_2, ..., y_K]^T ∈ R^{K×N} obtained with ℓ1-regularized sparse coding, we have the optimization problem

    \min_{p_k} \|y_k - S_k p_k\|_2^2 + \lambda \|q - p_k\|_2^2,    (8)

where S_k = S_q(z_k). The regularization in the second term is essential to avoid singularity when computing the inverse; its consequence is that p_k is encouraged to align itself to q when few data samples are available. This may be a reasonable prior for image denoising [9], but it is undesirable for approximating sparse coding, where we would like to suppress most of the coefficients in u to zero. Figure 1 shows the distribution of one dimension of the sparse coefficients z obtained from a collection of SIFT descriptors; q does not resemble this distribution. This motivates us to consider the generalized lasso [21] as an alternative that fits the distribution of the coefficients better.
3.2 Generalized Lasso

In the previous section we argued that the regularized least squares of Eqn. (8) does not give the desired result: most intervals should instead be set to zero. This naturally leads us to consider ℓ1-regularized sparse coding, also known as the lasso, formulated as

    \min_{p_k} \|y_k - S_k p_k\|_2^2 + \lambda \|p_k\|_1.    (9)
However, the drawback is that the learnt piece-wise linear function may become unstable when training data is noisy or missing, as illustrated in Figure 1 (b) and (c). It turns out that ℓ1 trend filtering [12], generally known as the generalized lasso [21], can overcome this problem. It is expressed as

    \min_{p_k} \|y_k - S_k p_k\|_2^2 + \lambda \|D p_k\|_1,    (10)

where D ∈ R^{(Q−2)×Q} is referred to as a penalty matrix, here the second-order difference operator

    D = \begin{bmatrix} -1 & 2 & -1 & & & \\ & -1 & 2 & -1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & -1 & 2 & -1 \end{bmatrix}.    (11)
To solve the above optimization problem, we can turn it into a sparse coding problem [21]. Since D is not invertible, the key is to augment D with A ∈ R^{2×Q} to build a square matrix D̃ = [D; A] ∈ R^{Q×Q} such that rank(D̃) = Q and the rows of A are orthogonal to the rows of D. To satisfy these constraints, A can for example be set to [1, 2, ..., Q; 2, 3, ..., Q+1]. If we let θ = [θ_1; θ_2] = D̃ p_k, where θ_1 = D p_k and θ_2 = A p_k, then S_k p_k = S_k D̃^{−1} θ = S_{k1} θ_1 + S_{k2} θ_2. After some substitutions, we see that θ_2 can be solved as θ_2 = (S_{k2}^T S_{k2})^{−1} S_{k2}^T (y_k − S_{k1} θ_1), given that θ_1 is already solved. To solve θ_1, we have the following sparse coding problem:

    \min_{\theta_1} \|(I - P) y_k - (I - P) S_{k1} \theta_1\|_2^2 + \lambda \|\theta_1\|_1,    (12)

where P = S_{k2} (S_{k2}^T S_{k2})^{−1} S_{k2}^T. Having computed both θ_1 and θ_2, we can recover the solution of p_k by D̃^{−1} θ. Further details can be found in [21].
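As an illustration of this change of variables, here is a compact NumPy sketch (our own; the θ_1 sub-problem of Eqn. (12) is solved with a plain ISTA loop as a stand-in for any preferred lasso solver, and the dense projector limits it to modest N):

```python
import numpy as np

def learn_pk_generalized_lasso(S, y, lam=0.1, Q=10, n_iter=500):
    """Learn one boundary vector p_k by the generalized lasso (Eqn. 10).

    S : (N, Q) slice-transform design built from S_q(z_k);
    y : (N,) target sparse coefficients for this dimension.
    """
    # Penalty matrix D of Eqn. (11) and the augmentation rows A.
    D = np.zeros((Q - 2, Q))
    for i in range(Q - 2):
        D[i, i:i + 3] = [-1.0, 2.0, -1.0]
    A = np.vstack([np.arange(1, Q + 1), np.arange(2, Q + 2)]).astype(float)
    Dt = np.vstack([D, A])                       # square, full-rank
    S1, S2 = np.hsplit(S @ np.linalg.inv(Dt), [Q - 2])
    P = S2 @ np.linalg.solve(S2.T @ S2, S2.T)    # projector onto range(S2)
    M = S1 - P @ S1                              # (I - P) S1
    b = y - P @ y                                # (I - P) y
    # ISTA for min ||b - M t1||^2 + lam ||t1||_1  (Eqn. 12).
    t1 = np.zeros(Q - 2)
    L = 1.0 / np.linalg.norm(M, 2) ** 2          # safe step size
    for _ in range(n_iter):
        g = t1 + L * (M.T @ (b - M @ t1))
        t1 = np.sign(g) * np.maximum(np.abs(g) - lam * L / 2.0, 0.0)
    # Closed form for theta_2 given theta_1, then recover p_k.
    t2 = np.linalg.solve(S2.T @ S2, S2.T @ (y - S1 @ t1))
    return np.linalg.solve(Dt, np.concatenate([t1, t2]))
```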
Given the learnt p, we can approximate the sparse solution of x by Eqn. (7). However, explicitly computing S_q(z) and multiplying it by p is somewhat redundant. Thus, we can alternatively compute each component of û as

    \hat{u}_k = (1 - r(z_k)) \cdot p_k(\pi(z_k) - 1) + r(z_k) \cdot p_k(\pi(z_k)),    (13)

whose time complexity is O(K). Since Eqn. (13) essentially uses p_k as a lookup table, the complexity is independent of Q. This is followed by ℓ1 normalization of û.
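A vectorized sketch of the resulting O(K) encoder (our illustration; p is assumed stored as a K×Q matrix of learnt boundary values and q as the shared boundaries):

```python
import numpy as np

def glas_encode(x, B, q, p):
    """Approximate the l1 sparse code of descriptor x via Eqn. (13).

    B : (D, K) dictionary; q : (Q,) shared bin boundaries;
    p : (K, Q) learnt boundary values, one row per code dimension.
    """
    z = B.T @ x
    z = np.clip(z, q[0], q[-1] - 1e-12)          # keep z inside [a, b)
    j = np.searchsorted(q, z, side='right')      # pi(z_k) per dimension
    r = (z - q[j - 1]) / (q[j] - q[j - 1])       # residues r(z_k)
    rows = np.arange(len(z))
    u = (1.0 - r) * p[rows, j - 1] + r * p[rows, j]
    n = np.abs(u).sum()                          # l1 normalisation
    return u / n if n > 0 else u
```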
While û can readily be used for the spatial max pooling stated in Eqn. (3), it does not yet capture any "explaining away" effect, in which the coefficients of correlated bases are mutually inhibited to remove redundancy. This is because each p_k is estimated independently in the transform domain [9]. In the next section, we propose an efficient post-refinement technique for mutual inhibition between the bases.
3.3 Capturing Dependency Between Bases

To handle the mutual inhibition between overcomplete bases, this section explains how to refine the sparse codes by solving regularized least squares on a significantly smaller active basis set. Given a local descriptor x and its initial sparse code û estimated with the above method, we set the non-zero components of the code to be active. Denoting the set of these active components by Ω, we have û_Ω and B_Ω, the corresponding subsets of the sparse code and dictionary bases. The goal is to compute a refined code v̂_Ω such that B_Ω v̂_Ω reconstructs x as accurately as possible. We formulate this as the regularized least squares problem

    \min_{\hat{v}_\Omega} \|x - B_\Omega \hat{v}_\Omega\|_2^2 + \gamma \|\hat{v}_\Omega - \hat{u}_\Omega\|_2^2,    (14)

where γ is the weight parameter of the regularization. This is convex and has the analytical solution v̂_Ω = (B_Ω^T B_Ω + γ I)^{−1} (B_Ω^T x + γ û_Ω).
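A minimal sketch of this refinement step, directly using the analytical solution above (function and parameter names are ours):

```python
import numpy as np

def glas_refine(x, B, u_hat, gamma=0.25):
    """Post-refinement of Eqn. (14): re-fit only the active components.

    x : (D,) descriptor; B : (D, K) dictionary; u_hat : (K,) initial code.
    """
    omega = np.flatnonzero(u_hat)            # active set
    if omega.size == 0:
        return u_hat
    Bo = B[:, omega]
    # Closed-form solution of the regularised least squares problem.
    v = np.linalg.solve(Bo.T @ Bo + gamma * np.eye(omega.size),
                        Bo.T @ x + gamma * u_hat[omega])
    out = np.zeros_like(u_hat)
    out[omega] = v
    return out
```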
The intuition behind the above formulation is that the initial sparse code û is a good starting point for refinement: allowing the redundant bases to compete against each other further reduces the reconstruction error. Empirically, the number of active components of each û is substantially small compared to the whole basis set. Hence, the linear system to be solved becomes much smaller,
                      SIFT (128 Dim.) [15]
Methods      KM        LLC [22]  PSD [11]  SC [24]   GLAS      GLAS+
15 Train     55.5±1.2  62.7±1.0  64.0±1.2  65.2±1.2  64.4±1.2  65.1±1.1
30 Train     63.0±1.2  69.6±0.8  70.6±0.9  71.6±0.7  71.6±1.0  72.3±0.7
Time (sec)   0.06      0.25      0.06      3.53      0.15      0.23

                 Local Self-Similarity (30 Dim.) [20]
Methods      KM        LLC [22]  PSD [11]  SC [24]   GLAS      GLAS+
15 Train     60.1±1.3  62.4±0.8  59.7±0.8  64.8±0.9  62.3±1.2  63.8±0.9
30 Train     63.0±1.2  69.7±1.3  67.2±0.9  72.5±1.6  69.8±1.4  71.0±1.1
Time (sec)   0.05      0.24      0.05      1.97      0.13      0.18

Table 1: Recognition accuracy on Caltech-101. The dictionary sizes for all methods are set to 1024. We also report the time taken to process 1000 local descriptors for each method.
which is computationally cheap. We also make sure that we do not deviate too much from the initial solution by introducing the regularization on v̂_Ω. This refinement procedure may appear similar to LLC [22]. However, in our case we do not preset the number of active bases: they are determined by the non-zero components of û. More importantly, we base our final solution on û and do not perform a nearest neighbour search. With this refinement procedure, the total time complexity becomes O(K + Z^2). We refer to GLAS with this post-refinement procedure as GLAS+.
4 Experimental Results
This section evaluates GLAS and GLAS+ on several challenging categorization datasets. To learn the mapping function, we used 50,000 local descriptors as data samples. The parameters Q, λ and γ are fixed to 10, 0.1 and 0.25 respectively for all experiments, unless otherwise stated. For comparison, we have implemented the methods discussed in Section 2. SC is our re-implementation of Yang et al. [24]. LLC is locality-constrained linear coding proposed by Wang et al. [22]; the number of nearest neighbours to consider is set to 5. PSD is predictive sparse decomposition [11], with a shrinkage function as its parametric mapping function. We also include KM, which builds its codebook with k-means clustering and adopts hard assignment as its local descriptor coding. For all methods, exactly the same local feature descriptors, spatial max pooling technique and linear SVM are used, so that only the local feature descriptor coding techniques differ. As for the descriptors, SIFT [15] and Local Self-Similarity [20] are used. SIFT is a histogram of gradient directions computed over an image patch, capturing appearance information; we sample a 16×16 patch at every 8-pixel step. In contrast, Local Self-Similarity computes the correlation between a small image patch of interest and its surrounding region, capturing the geometric layout of a local region. Spatial max pooling with 1×1, 2×2 and 4×4 image partitions is used. All implementations are in MATLAB for fair comparison.
4.1 Caltech-101

The Caltech-101 dataset [5] consists of 9144 images divided into 101 object categories. The images are scaled down to 300×300, preserving their aspect ratios. We train with 15/30 images per class and test with 15 images per class. The dictionary size of each method is set to 1024 for both SIFT and Local Self-Similarity.
The results are averaged over eight random training and testing splits and are reported in Table 1. For SIFT, GLAS+ is consistently better than GLAS, demonstrating the effectiveness of the mutual inhibition performed by the post-refinement procedure. Both GLAS and GLAS+ perform better than the other fast algorithms that produce sparse codes, and compete closely with SC; in fact, GLAS+ is slightly better when 30 training images per class are used. While the sparse codes for both GLAS and GLAS+ are learned from the solutions of SC, the approximated codes are not exactly the same as those of SC. Moreover, SC sometimes produces unstable codes due to the non-smooth convex property of the ℓ1 norm, as previously observed in [6]. In contrast, GLAS+
[Figure 2: three panels of average recognition plotted against (a) Q, (b) the generalized-lasso weight, and (c) the percentage of missing data, with curves for SC, RLS, GLAS, and GLAS+]
Figure 2: (a) Q, the number of bins used to quantize the interval of each sparse code component. (b) λ, the parameter that controls the weight of the norm used for the generalized lasso. (c) When some data samples are missing, GLAS is more robust than the regularized least squares of Eqn. (8).
approximates its sparse codes with a relatively smooth piece-wise linear mapping function learned with the generalized lasso (note that the ℓ1 norm penalizes changes in the shape of the function) and performs a smooth post-refinement. We suspect these differences contribute to the slightly better results of GLAS+ on this dataset.

Although PSD performs quite close to GLAS for SIFT, this is not the case for Local Self-Similarity, where GLAS outperforms PSD, probably because the distribution of sparse codes is not captured well by a simple shrinkage function. GLAS may therefore be effective for a wider range of distributions, which is useful for recognition with multiple feature types where speed is critical. GLAS performs worse than SC, but GLAS+ closes the gap between GLAS and SC. We suspect that because Local Self-Similarity (30 dim.) is of much lower dimensionality than SIFT (128 dim.), the mutual inhibition becomes more important; this might also explain why LLC performs reasonably well for this descriptor.
Table 1 also reports the computational time taken to process 1000 local descriptors with each method. GLAS and GLAS+ are slower than KM and PSD, but slightly faster than LLC and significantly faster than SC. This demonstrates the practical appeal of our approach: competitive recognition results achieved with fast computation.
Different values of Q, λ and γ are evaluated one parameter at a time. Figure 2 (a) shows the results for different Q; they are very stable beyond 10 bins, and since sparse codes are computed by Eqn. (13), the time complexity is unaffected by the choice of Q. Figure 2 (b) shows the results for different λ, which are also very stable, and we observe similar stability for γ.
We also validate that the generalized lasso of Eqn. (10) is more robust than the regularized least squares of Eqn. (8) when data samples are missing. When learning each p_k, we artificially remove data samples from an interval centered around a randomly sampled point, as illustrated in Figure 1 (c). We evaluate different numbers of removed samples, expressed as percentages of the whole data set. As shown in Figure 2 (c), the performance of RLS drops significantly as the amount of missing data increases, whereas GLAS and GLAS+ are barely affected.
4.2 Caltech-256

Caltech-256 [8] contains 30,607 images in 256 object categories. As with Caltech-101, we scale the images down to 300×300, preserving their aspect ratios. The results, averaged over eight random training and testing splits, are reported in Table 2; we use 25 testing images per class. This time, for SIFT, GLAS performs slightly worse than SC, but GLAS+ outperforms SC, probably for the same reason given for the Caltech-101 experiments. For Local Self-Similarity, the results are similar to Caltech-101. The performance of PSD is close to KM and is outperformed by GLAS, suggesting an inadequate fit of the sparse codes. LLC performs slightly better than GLAS, but not better than GLAS+. While SC performed best, GLAS+ comes quite close. We also plot computational time against accuracy for each method on SIFT and Local Self-Similarity in Figure 3 (a) and (b) respectively.
                      SIFT (128 Dim.) [15]
Methods    KM        LLC [22]  PSD [11]  SC [24]   GLAS      GLAS+
15 Train   22.7±0.4  28.1±0.5  30.4±0.6  30.7±0.4  30.4±0.4  32.1±0.4
30 Train   27.4±0.5  34.0±0.6  36.3±0.5  36.8±0.4  36.1±0.4  38.2±0.4

                 Local Self-Similarity (30 Dim.) [20]
Methods    KM        LLC [22]  PSD [11]  SC [24]   GLAS      GLAS+
15 Train   23.7±0.4  26.3±0.5  24.3±0.6  28.7±0.5  26.0±0.5  27.6±0.6
30 Train   28.5±0.4  31.9±0.5  29.3±0.5  34.7±0.4  31.2±0.5  33.3±0.5

Table 2: Recognition accuracy on Caltech-256. The dictionary sizes are all set to 2048 for SIFT and 1024 for Local Self-Similarity.
[Figure 3: average recognition plotted against computational time for KM, LLC, PSD, SC, GLAS, and GLAS+]
Figure 3: Computational time vs. average recognition. (a) and (b): SIFT and Local Self-Similarity respectively, evaluated on Caltech-256 with 30 training images and a dictionary size of 2048. (c): SIFT evaluated on 15 Scenes with a dictionary size of 1024.
4.3 15 Scenes

The 15 Scenes dataset [13] contains 4485 images divided into 15 scene classes, ranging from indoor to outdoor scenes. 100 training images per class are used for training and the rest for testing. We used SIFT to learn 1024 dictionary bases for each method. The results are plotted against computational time in Figure 3 (c). The result of GLAS+ (80.6%) is very similar to that of SC (80.7%), yet the former is significantly faster. In summary, our approach works well on three different challenging datasets.
5 Conclusion

This paper has presented GLAS, an approximation of ℓ1 sparse coding based on the generalized lasso, further extended with a post-refinement procedure to handle the mutual inhibition between bases that is essential in an overcomplete setting. The experiments have shown competitive performance of GLAS against SC with a significant computational speed-up. We have also demonstrated the effectiveness of GLAS on two local descriptor types, SIFT and Local Self-Similarity, where LLC and PSD each perform well on only one type. GLAS is not restricted to approximating ℓ1 sparse coding, but should be applicable to other variants of sparse coding in general; for example, it may be interesting to apply GLAS to Laplacian sparse coding [6], which achieves smoother sparse codes than ℓ1 sparse coding.
Acknowledgment
NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence
program.
References
[1] A. Adler, Y. Hel-Or, and M. Elad. A Shrinkage Learning Approach for Single Image Super-Resolution with Overcomplete Representations. In ECCV, 2010.
[2] D.L. Donoho. For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparse Solution. Communications on Pure and Applied Mathematics, 2006.
[3] D.L. Donoho and M. Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via L1 minimization. PNAS, 100(5):2197–2202, 2003.
[4] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least Angle Regression. Annals of
Statistics, 2004.
[5] L. Fei-Fei, R. Fergus, and P. Perona. Learning Generative Visual Models from Few Training
Examples: An Incremental Bayesian Approach Tested on 101 Object Categories. In CVPR
Workshop, 2004.
[6] S. Gao, W. Tsang, L. Chia, and P. Zhao. Local Features Are Not Lonely - Laplacian Sparse
Coding for Image Classification. In CVPR, 2010.
[7] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In ICML, 2010.
[8] G. Griffin, A. Holub, and P. Perona. Caltech-256 Object Category Dataset. Technical Report,
California Institute of Technology, 2007.
[9] Y. Hel-Or and D. Shaked. A Discriminative Approach for Wavelet Denoising. TIP, 2008.
[10] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the Best Multi-Stage Architecture for Object Recognition. In ICCV, 2009.
[11] K. Kavukcuoglu, M. Ranzato, and Y. LeCun. Fast inference in sparse coding algorithms with applications to object recognition. Technical Report CBLL-TR-2008-12-01, Computational and Biological Learning Lab, Courant Institute, NYU, 2008.
[12] S.-J. Kim, K. Koh, S. Boyd, and D. Gorinevsky. L1 trend filtering. SIAM Review, 2009.
[13] S. Lazebnik, C. Schmid, and J. Ponce. Beyond Bags of Features: Spatial Pyramid Matching
for Recognizing Natural Scene Categories. In CVPR, 2006.
[14] H. Lee, A. Battle, R. Raina, and A.Y. Ng. Efficient sparse coding algorithms. In NIPS, 2006.
[15] D.G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. IJCV, 2004.
[16] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Supervised Dictionary Learning. In
NIPS, 2008.
[17] J. Mairal, M. Leordeanu, F. Bach, M. Hebert, and J. Ponce. Discriminative Sparse Image
Models for Class-Specific Edge Detection and Image Interpretation. In ECCV, 2008.
[18] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: A strategy
employed by V1? Vision Research, 37, 1997.
[19] M. Ranzato, F.J. Huang, Y. Boureau, and Y. LeCun. Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition. In CVPR, 2007.
[20] E. Shechtman and M. Irani. Matching Local Self-Similarities across Image and Videos. In
CVPR, 2007.
[21] R. Tibshirani and J. Taylor. The Solution Path of the Generalized Lasso. The Annals of Statistics, 2010.
[22] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained Linear Coding
for Image Classification. In CVPR, 2010.
[23] J. Yang, J. Wright, T. Huang, and Y. Ma. Image Super-Resolution via Sparse Representation.
TIP, 2010.
[24] J. Yang, K. Yu, Y. Gong, and T.S. Huang. Linear spatial pyramid matching using sparse coding
for image classification. In CVPR, 2009.
[25] J. Yang, K. Yu, and T. Huang. Supervised Translation-Invariant Sparse Coding. In CVPR,
2010.
[26] K. Yu, T. Zhang, and Y. Gong. Nonlinear Learning using Local Coordinate Coding. In NIPS,
2009.
3,802 | 4,442 | A rational model of causal induction with continuous causes

Michael D. Pacer
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
[email protected]

Thomas L. Griffiths
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
Tom [email protected]
Abstract
Rational models of causal induction have been successful in accounting for people's judgments about causal relationships. However, these models have focused on explaining inferences from discrete data of the kind that can be summarized in a 2×2 contingency table. This severely limits the scope of these models, since the world often provides non-binary data. We develop a new rational model of causal induction using continuous dimensions, which aims to diminish the gap between empirical and theoretical approaches and real-world causal induction. This model predicts human judgments from previous studies better than models of discrete causal inference, and outperforms several other plausible models of causal induction with continuous causes in accounting for people's inferences in a new experiment.
1 Introduction

The problem of causal induction is central to science, and is something at which people are remarkably skilled, especially given its apparent difficulty. Understanding how people identify causal relationships has consequently become a challenge taken up by many research programs in cognitive science. One of the most successful of these programs has used rational solutions to the abstract problem of causal induction (in the spirit of [1, 2]) as a source of explanations for people's inferences [3, 4, 5, 6]. However, nearly all this research has assumed people have access to categorical information about whether or not a cause or effect is present on a given trial: the sort of information that appears in a 2×2 contingency table (see Figure 1(a)). Such an assumption may not be valid for many of the causal relationships that we see in the world.

For a simple example of a situation in which a continuous cause is relevant, consider the case of drinking coffee and wakefulness. Clearly, someone who drinks a beverage made by placing a single drop of coffee in a gallon of water will experience no effects of wakefulness, as an insufficient amount of the cause was present. Meanwhile, the diligent graduate student who imbibes upwards of 10 pots of coffee a day will experience a great deal of wakefulness. How much coffee one drinks is closely linked to whether wakefulness occurs; merely knowing that some amount of coffee was drunk is insufficient. And this problem is not relegated to those who wish to titrate their caffeination: many causes exist along continuous dimensions, even if their effects do not (e.g., medicine dosage and recovery, smoking and related death from cancer).¹
The primary strategy that has been explored in previous work on causal induction from continuous causes is one in which ambiguous examples are immediately categorized as indicating either the presence or the absence of the cause. This approach, taken by Marsh and Ahn [9], provides a way to

¹ We will focus on the case of continuous causes with binary outcomes. Learning the mapping between continuous variables is known as function learning (e.g., [7, 8]).
[Figure 1: panel (a), a 2×2 contingency table; panel (b), graphical models Graph 0 and Graph 1 over B, C, and E]
Figure 1: Causal induction. (a) A 2×2 contingency table. C is the cause, E the effect, with c+ and c− indicating the presence and absence of the cause, and similarly e+ and e−. (b) Graphical models showing possible causal relationships between cause C, effect E, and background B.
reduce continuous causes to the familiar binary case. In this paper, however, we argue that another approach can be fruitful: developing models that work directly with continuous values. We extend the causal support model [4], originally defined for binary causes, to work with continuous-valued causes. We then re-analyze the results of Marsh and Ahn [9], comparing people's causal judgments to the predictions made by a number of rational models of causal induction with both discrete and continuous causes. The predictions made by the continuous models for these experiments perform well, but are extremely similar, which led us to conduct a new experiment using stimuli that discriminate among the different models. We show that continuous causal support provides a better account of these data than the other models we consider.
2 Background
In this section we review previous work on rational models of causal induction, and summarize the
results of Marsh and Ahn [9] that we will use to evaluate different models later in the paper.
2.1 Rational models of causal induction
Rational models of causal induction have focused on the problem of determining the nature of the
relationship between a cause C and an effect E. These models can be divided into two groups.
One group focuses on estimating causal strength, such as ΔP [10], causal power [3] and pCI [11],
which attempt to identify the degree of relationship between two variables. The other group focuses
on causal structure, such as causal support [4], which attempts to identify how certain one can be
that a causal relationship exists at all. The causal support model has proven effective in predicting
human judgments in several studies [4, 5, 6], and we use it as the starting point for our model of
causal induction with continuous causes. The causal support model can be most easily described
in the context of causal graphical models [12] (see Figure 1(b)). In particular, we consider two
graphical models, Graph 0 (G0) and Graph 1 (G1), and we want to determine the log posterior odds of the models given some data D, i.e. log [P(G1|D)/P(G0|D)]. If we assume that both graphs are equally likely a priori (i.e. P(G0) = P(G1)), then this is equivalent to calculating the log Bayes factor, log [P(D|G1)/P(D|G0)]. In its most general form, causal support is this calculation, described less technically as identifying the evidence that D provides in favor of G1 over G0 [4].
In the particular case of causal inference over binary variables, we have three random variables representing the unknown background causes assumed to be always present (B), the possible cause (C), and the effect (E) in question. In Graph 0 (G0) only B causes E, and how often it does so is described by the weight parameter w0. Thus the probability of the event occurring under G0 is P(e+|b+, w0; G0) = w0.² Graph 1 (G1) allows C to potentially influence the probability of E; in particular, C also has an associated weight parameter w1. How we parameterize the relationship between B, C, and E determines the type of causal relationship we are considering. In order to capture generative causal relationships we use a noisy-OR parameterization for P(e|b+, c, w0, w1; G1). That is, under G1 the probability of E occurring (assuming b+) is

    P(e^+ \mid b^+, c, w_0, w_1; G_1) = 1 - (1 - w_0)(1 - w_1)^c    (1)

² Following [4], a superscript + indicates the presence of a variable, and a − indicates its absence. We also use c+ and c− to indicate that C takes the values 1 and 0 respectively.
A similar noisy-AND-NOT parameterization can be used for preventive causes [4], but we focus on generative causes in this paper.

Having defined these graphical models, we can compute the corresponding likelihoods. The data consist of the values of all n observed occurrences of causes and effects, i.e. D = {(e_1, c_1), (e_2, c_2), ..., (e_n, c_n)}. Assuming trials are conditionally independent, we have

    P(D \mid G_k) = \prod_{i=1}^{n} P(e_i \mid c_i, b^+, w_0, w_1; G_k)    (2)
where the noisy-OR parameterization is used, as in Equation 1. If we were concerned with estimating causal strength, we could use this likelihood to determine estimates of w0 and w1 under G1 and G0. However, if we want to compute a measure of causal structure, we need to integrate over all possible values of w0 and w1, assuming prior distributions on them. In the original causal support model [4], a uniform prior was used on w0 and w1 (for a more complex prior, see [6]).
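To make the structure of this computation concrete, the sketch below replaces the integrals over w0 and w1 with simple grid averages under the uniform priors (a coarse numerical stand-in of ours, not the original implementation):

```python
import numpy as np

def discrete_causal_support(e, c, grid=101):
    """Discrete causal support: log P(D|G1) - log P(D|G0) with uniform
    priors on w0 and w1, approximating the integrals by grid averages.

    e, c : (n,) binary arrays of effects and causes.
    """
    w = np.linspace(1e-3, 1.0 - 1e-3, grid)
    # Graph 0: background cause alone, P(e+|b+) = w0.
    p0 = np.broadcast_to(w[:, None], (grid, len(e)))
    lik0 = np.where(e == 1, p0, 1.0 - p0).prod(axis=1).mean()
    # Graph 1: noisy-OR of background and cause (Eqn. 1).
    w0 = w[:, None, None]
    w1 = w[None, :, None]
    p1 = 1.0 - (1.0 - w0) * (1.0 - w1) ** c      # shape (grid, grid, n)
    lik1 = np.where(e == 1, p1, 1.0 - p1).prod(axis=2).mean()
    return np.log(lik1) - np.log(lik0)
```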
Despite its success in modeling human judgments, this measure of causal support only works in a limited set of cases: those where the data can be summarized in a 2×2 contingency table. Addressing more complicated data sets (e.g. continuous-valued causes) requires significant modifications, which can be made either to the model or to the data. We propose a modification to the model, while others (e.g., [9]) have attempted to solve the problem by collapsing continuous data into binary form. We discuss the consequences of the latter strategy in the next section.
2.2 Previous work on continuous-valued causal induction
Marsh and Ahn [9] note the insufficiencies of current models of causal induction that result from considering only binary variables. Assuming that the data must be coerced into binary form, they proposed two potential solutions to this problem, and ruled out one of them. The first solution is that people simply ignore ambiguous information, and only deal with instances that can easily be categorized into 'cause' and 'not cause'. They reject this solution and instead opt for the idea that learners 'spontaneously categorize ambiguous evidence into one of the four types of evidence [used in contingency tables]' ([9], p. 4).

To test these claims, Marsh and Ahn conducted a series of experiments in which participants observed visual stimuli (e.g., Figure 2 (a)) representing a particular value along a continuous dimension, paired with a (binary) event either occurring or not occurring. Participants were asked to use these images to do two things. First, they were asked to estimate how many examples of each type of data they had seen. Then, they were asked 'to judge the strength between C and E on a scale from 0 (not a cause) to 100 (strongly causes)'. Marsh and Ahn used this second measure to show that participants use ambiguous evidence when making causal judgments, refuting the idea that people ignore the instances that cannot be easily categorized. Furthermore, they discovered that engaging in causal inference changes participants' judgments of how many instances of each category they saw. For example, when the 'ambiguous' stimuli were paired with the effect (e.g., condition AE of Experiment 1, see Table 1), participants claimed to have seen more examples of the C category. This evidence that people's frequency ratings were altered based on whether or not the effect was paired with the ambiguous stimuli was used to dismiss the possibility that participants were learning a continuous causal relationship.

While Marsh and Ahn demonstrate that causal induction altered how people assigned ambiguous stimuli to categories, this does not necessarily mean that people were spontaneously categorizing these stimuli and using that categorization to make causal judgments. An alternative account is that the boundary between the categories was itself ambiguous, and the evidence about the relationship between cause and effect influenced where people placed this boundary. Previous research suggests that category structures should not always be thought of as fixed [13] and that causal information can be used when learning category structures and meanings [14]. Our focus here is on investigating how people might induce causal relationships that involve continuous variables, rather than on understanding their influence on categorization. However, the existence of a plausible alternative account of Marsh and Ahn's results raises the possibility that we can understand their data without assuming that people spontaneously categorize ambiguous stimuli in order to make causal judgments. We will explore this possibility after introducing our rational model of causal induction.
Figure 2: Examples of continuous-valued stimuli. (a) Two sets of stimuli used by Marsh and Ahn [9]. The extreme stimuli indicated the presence and absence of a cause, while the intermediate stimulus was deemed 'ambiguous'. (b) A stimulus used in our experiments.
3 Defining causal support for continuous causes
Our goal in this section is to extend the rational analysis used to define the causal support model
[4] to causes with continuous values. Following the original model, we take causal support to be
the log likelihood ratio in favor of G1 over G0 , and assume that the causes combine in a noisy-OR.
However, rather than assuming that the influence of C is described by a single parameter w1 , we
instead define a function f that maps the value c of C ∈ R into [0, 1]. For any such function f_θ(·): R → [0, 1] with parameters θ, we then have the parameterization

    P(e^+ \mid b^+, c, w_0, \theta; G_1) = 1 - (1 - w_0)(1 - f_\theta(c))    (3)

where c is the (continuous) value of the cause C. The function f_θ(·) thus plays a role very similar to that of the link function in generalized linear models.

We use a specific choice for f_θ(·): the probit function (the cumulative distribution function (CDF) of the standard Normal distribution [15]), denoted Φ(·). The influence of C is encoded in two parameters, a bias parameter μ and a gain parameter σ. This gives the full parameterization

    P(e^+ \mid b^+, c, w_0, \mu, \sigma; G_1) = 1 - (1 - w_0)\left(1 - \Phi\left(\tfrac{c - \mu}{\sigma}\right)\right)

where μ indicates the point where the effective strength of C is 0.5, and σ determines the sharpness of the transition in strength around this threshold. It is straightforward to show that the original causal support model corresponds to a special case of this model when C only takes on a single value when it is present.³ Under the assumption that there is no background rate of occurrence (i.e., w0 = 0), this model is nearly equivalent to probit regression, which provides an excellent comparison case for identifying the role that the noisy-OR plays in explaining people's judgments.
To complete the specification of the model, we need to define prior distributions on the parameters. For the results reported here, w0 ∼ U(0, 1), as in [4], and we use the observed values c^(n) to produce the priors over μ and σ. We take μ ∼ U(c_min, c_max), where c_min and c_max are the minimum and maximum of c^(n). This keeps the prior on μ as uninformative as possible while only sampling from the range of values over which inference could reasonably be made. The prior on σ is a mixture distribution: we draw a variable z from an inverse Wishart distribution with one degree of freedom and a mean corresponding to the sample variance, and then set σ to either √z or −√z with equal probability. Initial investigations suggest the model is relatively robust to prior choice (e.g. varying the degrees of freedom in the inverse Wishart does not substantially change model predictions). Because of the complexity of analytically determining the joint likelihood, we use Monte Carlo simulation to approximate the integral over these parameters.
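A sketch of what this Monte Carlo approximation could look like under the stated priors (our illustration; SciPy's norm and invwishart supply the distributions, and the sample size, seed, and underflow guard are arbitrary choices):

```python
import numpy as np
from scipy.stats import norm, invwishart

def continuous_causal_support(e, c, n_samples=20000, seed=0):
    """Monte Carlo estimate of continuous causal support, the log Bayes
    factor log P(D|G1) - log P(D|G0), under the priors stated above.

    e : (n,) binary effects; c : (n,) continuous cause values.
    """
    rng = np.random.default_rng(seed)
    e, c = np.asarray(e), np.asarray(c, dtype=float)
    w0 = rng.uniform(0.0, 1.0, n_samples)
    mu = rng.uniform(c.min(), c.max(), n_samples)
    z = invwishart.rvs(df=1, scale=np.var(c), size=n_samples,
                       random_state=rng)
    sigma = np.sqrt(z) * rng.choice([-1.0, 1.0], n_samples)
    # Event probabilities under each sampled parameter set.
    f = norm.cdf((c[None, :] - mu[:, None]) / sigma[:, None])
    p1 = 1.0 - (1.0 - w0[:, None]) * (1.0 - f)        # Graph 1
    p0 = np.broadcast_to(w0[:, None], p1.shape)       # Graph 0
    lik1 = np.where(e == 1, p1, 1.0 - p1).prod(axis=1).mean()
    lik0 = np.where(e == 1, p0, 1.0 - p0).prod(axis=1).mean()
    return np.log(lik1 + 1e-300) - np.log(lik0 + 1e-300)
```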
³ In our continuous model, we assume the cause is always present but with varying strength. If we allow for the possibility that the cause is absent, and that it has no influence on the effect in such a situation, then we obtain P(e+|b+, c−, w0, μ, σ; G1) = w0, as required. We then observe that Φ((c − μ)/σ) plays an analogous role in Equation 3 to w1 in (1). To show equivalence, we need to show that it is possible for this quantity to have a uniform prior when c = 1. Take σ = 1, and define a Gaussian prior on μ with mean 1 and unit variance. (c − μ)/σ then follows a Gaussian distribution with mean 0 and unit variance. Since Φ(·) is the CDF of the standard Normal, the distribution of Φ((c − μ)/σ) is uniform on [0, 1].
Table 1: Contingencies and mean causal ratings from Marsh and Ahn [9]

Condition       Ex1:AE  Ex1:ĀE  Ex2:Zero  Ex2:Weak  Ex2:Moderate  Ex2:Perfect
N(e+, c+)       38      18      10|10|10  33|26|26  36|32|32      40|40|40
N(e−, c−)       18      38      10|10|10  13|13|13  16|16|16      20|20|20
N(e−, c+)       2       2       10|10|10  7|7|7     4|4|4         0|0|0
N(e+, c−)       2       2       10|10|10  7|14|7    4|8|4         0|0|0
Causal Rating   79.2    78.5    28.3      36.2      60.6          81.0

Note: Ex1 and Ex2 refer to Experiments 1 and 2. Vertical bars in the Ex2 contingencies separate the three possible strategies (1|2|3) proposed in [9] for assimilating ambiguous stimuli.
We developed this rational model in order to investigate how people engage in causal inference with continuous causes, and we proceeded in two ways. First, to demonstrate the usefulness of considering any model of continuous causal inference, we reanalyzed the causal ratings provided by participants in Marsh and Ahn's [9] Experiments 1 and 2. Second, to better identify which of the continuous causal models best predicts human judgments, we conducted a new experiment designed to distinguish between the various rational models.
4 Reanalyzing the results of Marsh and Ahn

We applied the continuous causal support model, together with several models of causal induction from discrete data and alternative statistical models for causal induction from continuous data, to two data sets from Marsh and Ahn [9]: the two conditions of Experiment 1 that contained ambiguous stimuli (AE and ĀE), and the four conditions of Experiment 2. Contingencies and mean ratings for these experiments are shown in Table 1.
4.1 Models

Discrete models. Following [4], we evaluated five models of causal induction from discrete data: ΔP [10], causal power [3], pCI [11], (discrete) causal support [4], and the χ² statistic. These models were applied to contingencies derived by discretizing the continuous stimuli in three different ways, following the strategies suggested by Marsh and Ahn: (1) if people believe in a generative causal relationship, all ambiguous information should be incorporated into the cause count (i.e. e+, c+); (2) people classify information as an example of (e+, c+) or (e+, c−) in proportion to the relationship they infer from the non-'ambiguous' examples; and (3) people increase (e+, c+) by the same number of 'ambiguous' cases as they would under (2), but do not similarly adjust (e+, c−). Because there are three potential sets of true event counts under the assimilation hypothesis for Experiment 2, we run the discrete models under all three methods of assimilation, so as to evaluate the assimilation hypothesis in the best possible case. The three assimilation schemes are represented in Table 1 as contingencies separated by vertical bars ('|').
Continuous models. We also evaluated several models that treat the causal variable as continuously valued. These include the causal support model described in the previous section, as well as traditional statistical models for inference about the relationship between continuous and binary variables: probit regression, which tests whether a continuous variable is related to a binary one, and a two-sample Student t-test, which tests whether a binary variable is related to a continuous one.

Both continuous causal support and the discrete models have the property that the more evidence there is for a cause, the larger the positive score produced by the model. We want a similar property to hold for the statistics we obtain from the alternative continuous models. If we treat the two-
Table 2: Correlations of model predictions with human data, and γ values

Discrete model predictions
                   ΔP       Power    pCI     Support   χ²
Possibility 1   r  −0.250   −0.250   −0.035  0.679     0.679
                γ  2×10⁻⁴   2×10⁻⁴   1.100   154.950   1×10⁻⁵
Possibility 2   r  −0.250   −0.250   −0.035  0.240     0.679
                γ  2×10⁻⁴   2×10⁻⁴   1.100   2×10⁻⁴    1×10⁻⁵
Possibility 3   r  −0.250   −0.250   0.239   0.679     0.679
                γ  2×10⁻⁴   2×10⁻⁴   16.142  77.350    1×10⁻⁵

Continuous model predictions
      C-Support  Probit |t|  Probit |β|  t-test |t|  t-test |β|
r     0.966      0.984       0.876       0.976       0.976
γ     0.475      2×10⁻⁴      0.320       1.132       1.132
sample t-test as a case of linear regression (with an indicator variable for whether or not the effect occurred as the regressor), we obtain β values for both the probit model and the t-test model. We can treat these β values as estimates of the strength of the relationship between the two variables. Both methods also produce a t statistic, indicating the evidence that β differs from zero; we can treat these t values as alternative measures of causal structure. However, the signs of the β and t statistics are highly dependent on how the data are represented, so we use |β| and |t| instead.
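A sketch of computing these statistics (our illustration, using statsmodels for the probit fit and SciPy for the t-test; the indicator-regression view makes the t-test's β simply the difference of group means):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def alternative_statistics(e, c):
    """|beta| and |t| for the probit and t-test models described above.

    e : (n,) binary effects; c : (n,) continuous cause values. A sketch;
    the probit fit can fail under perfect separation.
    """
    # Probit regression of e on c: slope and its t statistic.
    fit = sm.Probit(e, sm.add_constant(c)).fit(disp=0)
    probit_beta, probit_t = abs(fit.params[1]), abs(fit.tvalues[1])
    # Two-sample t-test, viewed as a regression of c on an effect
    # indicator, so beta is the difference of group means.
    c1, c0 = c[e == 1], c[e == 0]
    ttest_t = abs(stats.ttest_ind(c1, c0).statistic)
    ttest_beta = abs(c1.mean() - c0.mean())
    return probit_beta, probit_t, ttest_beta, ttest_t
```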
In their studies, Marsh and Ahn used four types of continuously varying stimuli that differed slightly in the parameters used to create them. Our models are invariant to the specification of the dimension, provided the specification accurately reflects the variance as observed by participants. The parameters used to generate their stimuli, along with the frequencies with which each value occurred and the associated effects, can be plugged directly into the models to produce predictions. We ran each model over each set of stimulus values and averaged the four resulting predictions to obtain the final general predictions, whose means were compared to the mean human judgments.
4.2 Results

Following [4], model predictions underwent a nonlinear transformation to accommodate nonlinearities in the response scale: y = sign(x)·|x|^γ, where γ was chosen to maximize the correlation (r) between the mean human ratings and the mean model predictions across the conditions. The results are shown in Table 2.

The re-analysis supports the idea that people were using continuous values in their causal judgments. The best correlation achieved by any discrete model, by discrete causal support and χ², was r = .679, substantially worse than that of any continuous model. The models of continuous causal inference successfully captured much of the variation in responses, with all continuous models performing well (all r > .85). The Probit |t| model performed best, r = .984, with continuous causal support and the t-test models not far behind at r = .966 and r = .976, respectively.
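For reference, the γ-fitting step can be sketched as follows (our illustration; the log-scale search bounds and the optimizer are our choices, motivated by the wide range of fitted γ values in Table 2, and non-constant inputs are assumed):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import pearsonr

def best_power_law_fit(model, human):
    """Find gamma maximising the correlation between transformed model
    predictions y = sign(x)|x|^gamma and mean human ratings."""
    model = np.asarray(model, float)
    human = np.asarray(human, float)

    def neg_r(log_gamma):
        y = np.sign(model) * np.abs(model) ** np.exp(log_gamma)
        return -pearsonr(y, human)[0]

    res = minimize_scalar(neg_r, bounds=(-12.0, 7.0), method='bounded')
    return np.exp(res.x), -res.fun      # (gamma, best correlation)
```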
5 Distinguishing between the continuous models

In the previous section, all of the models of continuous causal induction performed well, but they also made very similar predictions to one another, making it difficult to distinguish which model people might be using. To better determine which of these models most accurately captures human causal induction over continuous dimensions, we need to construct data sets that produce divergent predictions across the models.

Because of the noisy-OR parameterization of the generative model, (discrete) causal support predictions are sensitive to the base rate of occurrence, while standard statistical tests (e.g., χ²) lack this sensitivity despite otherwise being good approximations to the rational model [4]. The continuous causal support model also uses a noisy-OR parameterization, meaning that it too will be sensitive
Figure 3: Datasets 1 - 9 for the current experiment. The horizontal axis denotes the value of the
cause, while the vertical axis denotes whether or not the event occurred.
to base rates in ways that standard statistical models will not. More generally, the assumption of a particular form for generative causal relationships means that, for some data sets, flipping the values of the effect (replacing a 0 with a 1 and vice versa) can result in different continuous causal support values, though it leaves the predictions made by the standard methods unchanged.

We designed nine data sets to produce such differential predictions. Each data set consisted of a series of fifty (e, c) pairs, where c ∈ {.02, .04, ..., 1} and e ∈ {0, 1}; the data sets differed only in the function relating c to e. The first four data sets (Figure 3, 1-4) were designed as follows: (1) for c < .6, e ∼ Bernoulli(.6), and for c ≥ .6, e = 1; (2) the e values of (1), flipped; (3) for c < .6, e ∼ Bernoulli(.6), and for c ≥ .6, e = 0; and (4) the e values of (3), flipped. The next five data sets (Figure 3, 5-9) were meant to be analogous to the base rate effects studied in [4]: there was no relationship between c and e, but the rate at which e = 1 differed between data sets, sampled from Bernoulli(p) with p = .1, .25, .5, .75, .9 for data sets 5-9 respectively. These data sets were then used as the basis for a new behavioral experiment.
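A sketch of generating these nine data sets (random draws mean the exact samples differ from those shown in Figure 3; here data sets 2 and 4 flip freshly generated copies of 1 and 3):

```python
import numpy as np

def make_dataset(k, rng=None):
    """Data set k in 1..9: fifty (e, c) pairs with c in {.02, ..., 1}."""
    rng = rng or np.random.default_rng(0)
    c = np.arange(1, 51) / 50.0
    if k in (1, 2):                   # e = 1 deterministically above .6
        e = np.where(c < .6, rng.random(50) < .6, True)
    elif k in (3, 4):                 # e = 0 deterministically above .6
        e = np.where(c < .6, rng.random(50) < .6, False)
    else:                             # no relationship; varying base rate
        p = {5: .1, 6: .25, 7: .5, 8: .75, 9: .9}[k]
        e = rng.random(50) < p
    e = e.astype(int)
    if k in (2, 4):                   # flipped versions of 1 and 3
        e = 1 - e
    return e, c
```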
5.1 Method

Participants. A total of 147 participants were recruited through the Amazon Mechanical Turk web service and were paid $0.25 for their participation. Each participant provided a single judgment and was randomly assigned to one of the nine data set conditions described above. To account for participants who did not read the instructions and consider the data, we eliminated anyone who took less than sixty seconds to complete the study.⁴ This constraint removed twelve participants, leaving 135 for analysis: fifteen participants in each condition.
Procedure. Participants were told that they would be assisting a scientist in identifying 'whether or not different levels of a chemical cause a type of bacteria to produce a protein'. They were told that they would see an array of fifty images like the one in Figure 2(b), each denoting the outcome of one batch of bacteria. Each image consisted of three elements: (1) a black bar whose size relative to (2) a constant gray line denoted how much of the chemical was in that batch, with a larger bar indicating more of the chemical; and (3) either a green checkmark or a red cross denoting whether or not the protein was found. The images included in the array were determined by the data condition, and were placed in a random order for each participant. Participants were told to take their time in analyzing the data, and were then asked to rate 'whether they think the chemical causes the protein to be produced' on a 0-100 scale, where 0

⁴ Though we eliminated these subjects from the analysis here, retaining them does not change any of the r scores by more than ±.02. In fact, including these participants increases the performance of our model and decreases the performance of the alternative models.
[Figure 4: six panels of scaled values across data sets 1-9: human responses with error bars; continuous causal support (r = .74); probit regression |t| and |β|; and independent t-test |t| and |β| (r between .01 and .06)]
Figure 4: Experimental results, showing human judgments (error bars are one standard error), together with unscaled model predictions and corresponding correlations.
meant extremely unlikely and 100 meant extremely likely. This scale was designed to obtain scalar
estimates of degrees of belief in causal structure [6].
5.2 Results

As above, we use a power-law transformation to accommodate nonlinearities in the response scale. Note that Figure 4 does not reflect the maximal correlations of the transformed values for the probit and t-test models. The optimized correlations between the mean human responses and the mean model predictions for the probit |β| and t-test |β| models were r = .060 (with, respectively, γ = 408.6 and γ = 164.15). The optimized correlation for the probit |t| model was r = 0.028 with γ = 2×10⁻⁴, and for the t-test |t| model r = 0.012 with γ = 12.2. We did not include the optimized graphs because, for all models except continuous causal support, the optimally scaled mean values became essentially binary predictions, and as such conveyed no information about how the probit and t-test predictions differ from those of continuous causal support. The values in Figure 4 therefore reflect the unscaled case (i.e. γ = 1).
The results are striking in that, though all the models performed well at predicting people's judgments in the Marsh and Ahn studies, all but the continuous causal support model perform poorly
here. Continuous causal support outperforms every other model of continuous causal inference
(r = .744, with γ = 0.92). Still, it does seem to underestimate human causal ratings in data sets 8
and 9 (see Figure 4), which suggests further investigation of this phenomenon is needed.
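To make the scaling procedure concrete, the sketch below shows one way to apply a power-law transform with exponent γ to model predictions and pick the γ that maximizes the correlation with mean human ratings. The min-max normalization onto the 0-100 rating scale and the grid search over γ are assumptions of this sketch, not necessarily the exact procedure used here.

```python
import numpy as np

def power_law_scale(predictions, gamma):
    # Normalize raw model predictions to [0, 1], apply the power-law
    # transform, and map onto the 0-100 rating scale. The min-max
    # normalization is an assumption made for this sketch.
    p = np.asarray(predictions, dtype=float)
    p = (p - p.min()) / (p.max() - p.min())
    return 100.0 * p ** gamma

def optimized_correlation(predictions, human_means, gammas):
    # Grid-search gamma for the value maximizing Pearson's r with the
    # mean human responses (cf. the optimized r values reported above).
    best_r, best_gamma = -np.inf, None
    for g in gammas:
        r = np.corrcoef(power_law_scale(predictions, g), human_means)[0, 1]
        if r > best_r:
            best_r, best_gamma = r, g
    return best_r, best_gamma
```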
6 Conclusion
We have proposed a new rational model of causal induction using continuous dimensions, continuous causal support, which aims to be a first step towards filling the gap between existing models
of causal induction and real-world cases of causal learning. This model successfully predicts human judgments found in previous work, and outperforms several other plausible models of causal
induction with continuous causes. Future work will hopefully continue to bring our models of
causal induction ever closer to addressing the problem of real-world causal induction, for example
by looking at how temporal information is used in conjunction with traditional statistical information. Consistent with a continuous view of causal induction, we suspect that more work in each of
these directions is likely to produce positive results.
Acknowledgements: This work was supported by a Berkeley Graduate Fellowship given to MP and grants IIS0845410 from the National Science Foundation and FA-9550-10-1-0232 from the Air Force Office of Scientific
Research to TLG.
References
[1] J. R. Anderson. The Adaptive Character of Thought. Erlbaum, Hillsdale, NJ, 1990.
[2] D. Marr. Vision. W. H. Freeman, San Francisco, CA, 1982.
[3] P. Cheng. From covariation to causation: A causal power theory. Psychological Review, 104:367-405, 1997.
[4] T. L. Griffiths and J. B. Tenenbaum. Structure and strength in causal induction. Cognitive Psychology, 51:354-384, 2005.
[5] T. L. Griffiths and J. B. Tenenbaum. Theory-based causal induction. Psychological Review, 116(4):661, 2009.
[6] H. Lu, A. L. Yuille, M. Liljeholm, P. W. Cheng, and K. J. Holyoak. Bayesian generic priors for causal learning. Psychological Review, 115(4):955, 2008.
[7] J. R. Busemeyer, E. Byun, E. L. DeLosh, and M. A. McDaniel. Learning functional relations based on experience with input-output pairs by humans and artificial neural networks. In K. Lamberts and D. Shanks, editors, Concepts and Categories, pages 405-437. MIT Press, Cambridge, 1997.
[8] T. L. Griffiths, C. G. Lucas, J. J. Williams, and M. L. Kalish. Modeling human function learning with Gaussian processes. In D. Koller, Y. Bengio, D. Schuurmans, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 21, Cambridge, MA, 2009. MIT Press.
[9] J. K. Marsh and W. Ahn. Spontaneous assimilation of continuous values and temporal information in causal induction. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(2):334, 2009.
[10] P. W. Cheng and L. R. Novick. A probabilistic contrast model of causal induction. Journal of Personality and Social Psychology, 58:545-567, 1990.
[11] P. A. White. Making causal judgments from the proportion of confirming instances: the pCI rule. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29:710-727, 2003.
[12] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Francisco, CA, 1988.
[13] M. R. Waldmann and Y. Hagmayer. Categories and causality: The neglected direction. Cognitive Psychology, 53(1):27-58, 2006.
[14] M. R. Waldmann, K. J. Holyoak, and A. Fratianne. Causal models and the acquisition of category structure. Journal of Experimental Psychology: General, 124:181-206, 1995.
[15] C. I. Bliss. The calculation of the dosage-mortality curve. Annals of Applied Biology, 22(1):134-167, 1935.
| 4442 | [vw_text: bag-of-words feature vector for paper 4442; omitted] |
3,803 | 4,443 | Algorithms for Hyper-Parameter Optimization
Rémi Bardenet
Laboratoire de Recherche en Informatique
Université Paris-Sud
bardenet@lri.fr
James Bergstra
The Rowland Institute
Harvard University
bergstra@rowland.harvard.edu
Yoshua Bengio
Dépt. d'Informatique et Recherche Opérationnelle
Université de Montréal
bengioy@umontreal.ca
Balázs Kégl
Linear Accelerator Laboratory
Université Paris-Sud, CNRS
balazs.kegl@gmail.com
Abstract
Several recent advances to the state of the art in image classification benchmarks
have come from better configurations of existing techniques rather than novel approaches to feature learning. Traditionally, hyper-parameter optimization has been
the job of humans because they can be very efficient in regimes where only a few
trials are possible. Presently, computer clusters and GPU processors make it possible to run more trials and we show that algorithmic approaches can find better
results. We present hyper-parameter optimization results on tasks of training neural networks and deep belief networks (DBNs). We optimize hyper-parameters
using random search and two new greedy sequential methods based on the expected improvement criterion. Random search has been shown to be sufficiently
efficient for learning neural networks for several datasets, but we show it is unreliable for training DBNs. The sequential algorithms are applied to the most difficult
DBN learning problems from [1] and find significantly better results than the best
previously reported. This work contributes novel techniques for making response
surface models P (y|x) in which many elements of hyper-parameter assignment
(x) are known to be irrelevant given particular values of other elements.
1 Introduction
Models such as Deep Belief Networks (DBNs) [2], stacked denoising autoencoders [3], convolutional networks [4], as well as classifiers based on sophisticated feature extraction techniques
have from ten to perhaps fifty hyper-parameters, depending on how the experimenter chooses to
parametrize the model, and how many hyper-parameters the experimenter chooses to fix at a reasonable default. The difficulty of tuning these models makes published results difficult to reproduce
and extend, and makes even the original investigation of such methods more of an art than a science.
Recent results such as [5], [6], and [7] demonstrate that the challenge of hyper-parameter optimization in large and multilayer models is a direct impediment to scientific progress. These works
have advanced state of the art performance on image classification problems by more concerted
hyper-parameter optimization in simple algorithms, rather than by innovative modeling or machine
learning strategies. It would be wrong to conclude from a result such as [5] that feature learning
is useless. Instead, hyper-parameter optimization should be regarded as a formal outer loop in the
learning process. A learning algorithm, as a functional from data to classifier (taking classification
problems as an example), includes a budgeting choice of how many CPU cycles are to be spent
on hyper-parameter exploration, and how many CPU cycles are to be spent evaluating each hyperparameter choice (i.e. by tuning the regular parameters). The results of [5] and [7] suggest that
with current generation hardware such as large computer clusters and GPUs, the optimal allocation
of CPU cycles includes more hyper-parameter exploration than has been typical in the machine
learning literature.
Hyper-parameter optimization is the problem of optimizing a loss function over a graph-structured
configuration space. In this work we restrict ourselves to tree-structured configuration spaces. Configuration spaces are tree-structured in the sense that some leaf variables (e.g. the number of hidden
units in the 2nd layer of a DBN) are only well-defined when node variables (e.g. a discrete choice of
how many layers to use) take particular values. Not only must a hyper-parameter optimization algorithm optimize over variables which are discrete, ordinal, and continuous, but it must simultaneously
choose which variables to optimize.
In this work we define a configuration space by a generative process for drawing valid samples.
Random search is the algorithm of drawing hyper-parameter assignments from that process and
evaluating them. Optimization algorithms work by identifying hyper-parameter assignments that
could have been drawn, and that appear promising on the basis of the loss function?s value at other
points. This paper makes two contributions: 1) Random search is competitive with the manual
optimization of DBNs in [1], and 2) Automatic sequential optimization outperforms both manual
and random search.
Section 2 covers sequential model-based optimization, and the expected improvement criterion. Section 3 introduces a Gaussian Process based hyper-parameter optimization algorithm. Section 4 introduces a second approach based on adaptive Parzen windows. Section 5 describes the problem of
DBN hyper-parameter optimization, and shows the efficiency of random search. Section 6 shows
the efficiency of sequential optimization on the two hardest datasets according to random search.
The paper concludes with discussion of results and concluding remarks in Section 7 and Section 8.
2 Sequential Model-based Global Optimization
Sequential Model-Based Global Optimization (SMBO) algorithms have been used in many applications where evaluation of the fitness function is expensive [8, 9]. In an application where the true
fitness function f : X → R is costly to evaluate, model-based algorithms approximate f with a surrogate that is cheaper to evaluate. Typically the inner loop in an SMBO algorithm is the numerical
optimization of this surrogate, or some transformation of the surrogate. The point x* that maximizes
the surrogate (or its transformation) becomes the proposal for where the true function f should be
evaluated. This active-learning-like algorithm template is summarized in Figure 1. SMBO algorithms differ in what criterion they optimize to obtain x* given a model (or surrogate) of f, and in
how they model f via the observation history H.
SMBO(f, M_0, T, S)
  1   H ← ∅
  2   For t ← 1 to T,
  3       x* ← argmin_x S(x, M_{t-1})
  4       Evaluate f(x*)                  ▷ Expensive step
  5       H ← H ∪ {(x*, f(x*))}
  6       Fit a new model M_t to H
  7   return H

Figure 1: The pseudo-code of generic Sequential Model-Based Optimization.
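For concreteness, here is a minimal Python rendering of the loop in Figure 1. The fit_model and suggest handles stand in for steps 6 and 3 respectively; their interfaces are our own assumption, not an API defined in this paper.

```python
def smbo(f, fit_model, suggest, T):
    """Generic SMBO loop mirroring Figure 1: propose, evaluate, refit.

    f         -- the expensive fitness function to minimize
    fit_model -- fits a surrogate model M_t to the history H (step 6)
    suggest   -- optimizes the criterion S over the surrogate (step 3)
    T         -- number of trials
    """
    history = []          # H <- empty
    model = None          # M_0: no surrogate before any observations
    for _ in range(T):
        x_star = suggest(model, history)   # cheap inner optimization
        y = f(x_star)                      # expensive step (step 4)
        history.append((x_star, y))
        model = fit_model(history)
    return history
```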
The algorithms in this work optimize the criterion of Expected Improvement (EI) [10]. Other criteria have been suggested, such as Probability of Improvement and Expected Improvement [10],
minimizing the Conditional Entropy of the Minimizer [11], and the bandit-based criterion described
in [12]. We chose to use the EI criterion in our work because it is intuitive, and has been shown to
work well in a variety of settings. We leave the systematic exploration of improvement criteria for
future work. Expected improvement is the expectation under some model M of f : X → R that
f(x) will exceed (negatively) some threshold y*:

$$\mathrm{EI}_{y^*}(x) := \int_{-\infty}^{\infty} \max(y^* - y,\ 0)\, p_M(y|x)\, dy. \tag{1}$$
The contribution of this work is two novel strategies for approximating f by modeling H: a hierarchical Gaussian Process and a tree-structured Parzen estimator. These are described in Section 3
and Section 4 respectively.
3 The Gaussian Process Approach (GP)
Gaussian Processes have long been recognized as a good method for modeling loss functions in
model-based optimization literature [13]. Gaussian Processes (GPs, [14]) are priors over functions
that are closed under sampling, which means that if the prior distribution of f is believed to be a GP
with mean 0 and kernel k, the conditional distribution of f knowing a sample H = (x_i, f(x_i))_{i=1}^n
of its values is also a GP, whose mean and covariance function are analytically derivable. GPs with
generic mean functions can in principle be used, but it is simpler and sufficient for our purposes
to only consider zero mean processes. We do this by centering the function values in the considered data sets. Modelling e.g. linear trends in the GP mean leads to undesirable extrapolation in
unexplored regions during SMBO [15].
The above-mentioned closedness property, along with the fact that GPs provide an assessment of
prediction uncertainty incorporating the effect of data scarcity, makes the GP an elegant candidate
for both finding a candidate x* (Figure 1, step 3) and fitting a model M_t (Figure 1, step 6). The runtime
of each iteration of the GP approach scales cubically in |H| and linearly in the number of variables
being optimized; however, the expense of the function evaluations f(x*) typically dominates even
this cubic cost.
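The posterior mean and variance this step requires can be computed in closed form; the sketch below shows the standard zero-mean computation via a Cholesky factorization, whose O(|H|^3) solve is the cubic cost mentioned above. The function names and the jitter value are assumptions of this sketch.

```python
import numpy as np

def gp_posterior(K, k_star, k_star_star, y, jitter=1e-8):
    """Zero-mean GP posterior at one test point.

    K           -- kernel matrix over the observed inputs
    k_star      -- kernel vector between the test point and observed inputs
    k_star_star -- kernel value of the test point with itself
    y           -- centered observed function values
    """
    L = np.linalg.cholesky(K + jitter * np.eye(len(y)))   # O(n^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = k_star @ alpha                                   # posterior mean
    v = np.linalg.solve(L, k_star)
    var = max(k_star_star - v @ v, 0.0)                   # posterior variance
    return mu, var
```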
3.1 Optimizing EI in the GP

We model f with a GP and set y* to the best value found after observing H: y* = min{f(x_i), 1 ≤ i ≤ n}. The model p_M in (1) is then the posterior GP knowing H. The EI function in (1) encapsulates a compromise between regions where the mean function is close to or better than y* and
under-explored regions where the uncertainty is high.
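Under the Gaussian posterior, the integral in (1) has a well-known closed form; a minimal sketch (ours, not the paper's code) follows.

```python
import numpy as np
from scipy.stats import norm

def gaussian_ei(mu, sigma, y_star):
    # Closed-form EI for minimization when p_M(y|x) = N(mu, sigma^2)
    # and y_star is the best observed loss:
    #   EI = (y_star - mu) * Phi(z) + sigma * phi(z),  z = (y_star - mu) / sigma
    sigma = np.maximum(sigma, 1e-12)   # numerical guard
    z = (y_star - mu) / sigma
    return (y_star - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```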
EI functions are usually optimized with an exhaustive grid search over the input space, or a Latin
Hypercube search in higher dimensions. However, some information on the landscape of the EI criterion can be derived from simple computations [16]: 1) it is always non-negative and zero at training
points from D, 2) it inherits the smoothness of the kernel k, which is in practice often at least once
differentiable, and noticeably, 3) the EI criterion is likely to be highly multi-modal, especially as
the number of training points increases. The authors of [16] used the preceding remarks on the
landscape of EI to design an evolutionary algorithm with mixture search, specifically aimed at optimizing EI, that is shown to outperform exhaustive search for a given budget in EI evaluations. We
borrow here their approach and go one step further. We keep the Estimation of Distribution (EDA,
[17]) approach on the discrete part of our input space (categorical and discrete hyper-parameters),
where we sample candidate points according to binomial distributions, while we use the Covariance
Matrix Adaptation - Evolution Strategy (CMA-ES, [18]) for the remaining part of our input space
(continuous hyper-parameters). CMA-ES is a state-of-the-art gradient-free evolutionary algorithm
for optimization on continuous domains, which has been shown to outperform the Gaussian search
EDA. Notice that such a gradient-free approach allows non-differentiable kernels for the GP regression. We do not take on the use of mixtures in [16], but rather restart the local searches several times,
starting from promising places. The use of tesselations suggested by [16] is prohibitive here, as our
task often means working in more than 10 dimensions, thus we start each local search at the center
of mass of a simplex with vertices randomly picked among the training points.
Finally, we remark that all hyper-parameters are not relevant for each point. For example, a DBN
with only one hidden layer does not have parameters associated to a second or third layer. Thus it
is not enough to place one GP over the entire space of hyper-parameters. We chose to group the
hyper-parameters by common use in a tree-like fashion and place different independent GPs over
each group. As an example, for DBNs, this means placing one GP over common hyper-parameters,
including categorical parameters that indicate what are the conditional groups to consider, three
GPs on the parameters corresponding to each of the three layers, and a few 1-dimensional GPs over
individual conditional hyper-parameters, like ZCA energy (see Table 1 for DBN parameters).
4 Tree-structured Parzen Estimator Approach (TPE)
Anticipating that our hyper-parameter optimization tasks will mean high dimensions and small fitness evaluation budgets, we now turn to another modeling strategy and EI optimization scheme for
the SMBO algorithm. Whereas the Gaussian-process based approach modeled p(y|x) directly, this
strategy models p(x|y) and p(y).
Recall from the introduction that the configuration space X is described by a graph-structured generative process (e.g. first choose a number of DBN layers, then choose the parameters for each).
The tree-structured Parzen estimator (TPE) models p(x|y) by transforming that generative process,
replacing the distributions of the configuration prior with non-parametric densities. In the experimental section, we will see that the configuration space is described using uniform, log-uniform,
quantized log-uniform, and categorical variables. In these cases, the TPE algorithm makes the
following replacements: uniform → truncated Gaussian mixture, log-uniform → exponentiated
truncated Gaussian mixture, categorical → re-weighted categorical. Using different observations
{x^(1), ..., x^(k)} in the non-parametric densities, these substitutions represent a learning algorithm
that can produce a variety of densities over the configuration space X. The TPE defines p(x|y)
using two such densities:
$$p(x|y) = \begin{cases} \ell(x) & \text{if } y < y^* \\ g(x) & \text{if } y \ge y^*, \end{cases} \tag{2}$$
where ℓ(x) is the density formed by using the observations {x^(i)} such that the corresponding loss
f(x^(i)) was less than y*, and g(x) is the density formed by using the remaining observations.
Whereas the GP-based approach favoured quite an aggressive y* (typically less than the best observed loss), the TPE algorithm depends on a y* that is larger than the best observed f(x) so that
some points can be used to form ℓ(x). The TPE algorithm chooses y* to be some quantile γ of the
observed y values, so that p(y < y*) = γ, but no specific model for p(y) is necessary. By maintaining sorted lists of observed variables in H, the runtime of each iteration of the TPE algorithm can
scale linearly in |H| and linearly in the number of variables (dimensions) being optimized.
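Splitting the history at the γ-quantile is a small bookkeeping step; a sketch, under our own naming conventions:

```python
import numpy as np

def split_history(history, gamma=0.15):
    """Split trials into the set that defines l(x) (losses below y*)
    and the set that defines g(x), with y* the gamma-quantile of the
    observed losses."""
    ys = np.array([y for _, y in history])
    y_star = np.quantile(ys, gamma)
    xs_l = [x for x, y in history if y < y_star]
    xs_g = [x for x, y in history if y >= y_star]
    return y_star, xs_l, xs_g
```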
4.1 Optimizing EI in the TPE algorithm
The parametrization of p(x, y) as p(y)p(x|y) in the TPE algorithm was chosen to facilitate the
optimization of EI.
$$\mathrm{EI}_{y^*}(x) = \int_{-\infty}^{y^*} (y^* - y)\, p(y|x)\, dy = \int_{-\infty}^{y^*} (y^* - y)\, \frac{p(x|y)\, p(y)}{p(x)}\, dy. \tag{3}$$

By construction, $\gamma = p(y < y^*)$ and $p(x) = \int_{\mathbb{R}} p(x|y)\, p(y)\, dy = \gamma \ell(x) + (1-\gamma) g(x)$. Therefore

$$\int_{-\infty}^{y^*} (y^* - y)\, p(x|y)\, p(y)\, dy = \ell(x) \int_{-\infty}^{y^*} (y^* - y)\, p(y)\, dy = \gamma y^* \ell(x) - \ell(x) \int_{-\infty}^{y^*} p(y)\, dy,$$

so that finally

$$\mathrm{EI}_{y^*}(x) = \frac{\gamma y^* \ell(x) - \ell(x) \int_{-\infty}^{y^*} p(y)\, dy}{\gamma \ell(x) + (1-\gamma) g(x)} \propto \left( \gamma + \frac{g(x)}{\ell(x)} (1-\gamma) \right)^{-1}.$$

This last expression
shows that to maximize improvement we would like points x with high probability under ℓ(x)
and low probability under g(x). The tree-structured form of ℓ and g makes it easy to draw many
candidates according to ℓ and evaluate them according to g(x)/ℓ(x). On each iteration, the algorithm
returns the candidate x* with the greatest EI.
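A minimal sketch of this proposal step follows; the sampler and density handles are assumed interfaces, and scoring is done in log space for numerical stability.

```python
import numpy as np

def tpe_suggest(sample_l, logpdf_l, logpdf_g, n_candidates=100, rng=None):
    # Draw candidates from l(x) and keep the one maximizing l(x)/g(x),
    # which by the derivation above is the EI-maximizing choice.
    rng = rng or np.random.default_rng()
    candidates = [sample_l(rng) for _ in range(n_candidates)]
    scores = [logpdf_l(c) - logpdf_g(c) for c in candidates]  # log(l/g)
    return candidates[int(np.argmax(scores))]
```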
4.2 Details of the Parzen Estimator
The models ℓ(x) and g(x) are hierarchical processes involving discrete-valued and continuous-valued variables. The Adaptive Parzen Estimator yields a model over X by placing density in
the vicinity of K observations B = {x^(1), ..., x^(K)} ⊆ H. Each continuous hyper-parameter was
specified by a uniform prior over some interval (a, b), or a Gaussian, or a log-uniform distribution.
The TPE substitutes an equally-weighted mixture of that prior with Gaussians centered at each of
the x^(i) ∈ B. The standard deviation of each Gaussian was set to the greater of the distances to the
left and right neighbor, but clipped to remain in a reasonable range. In the case of the uniform, the
points a and b were considered to be potential neighbors. For discrete variables, supposing the prior
was a vector of N probabilities p_i, the posterior vector elements were proportional to N p_i + C_i,
where C_i counts the occurrences of choice i in B. The log-uniform hyper-parameters were treated
as uniforms in the log domain.
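For a single continuous hyper-parameter with a uniform prior on (a, b), the bandwidth rule reads as below. The exact clipping range is described only as "reasonable" above, so the one used in this sketch is an assumption.

```python
import numpy as np

def adaptive_parzen_components(points, a, b):
    """Means and standard deviations of the Gaussian mixture placed on
    observations of one uniform(a, b) hyper-parameter. Each sigma is the
    greater of the distances to the left and right neighbors, with a and
    b acting as extra neighbors."""
    mus = np.sort(np.asarray(points, dtype=float))
    ext = np.concatenate(([a], mus, [b]))
    left = mus - ext[:-2]            # distance to the left neighbor
    right = ext[2:] - mus            # distance to the right neighbor
    sigmas = np.maximum(left, right)
    width = b - a                    # clip to a "reasonable range" (assumed)
    return mus, np.clip(sigmas, width / (100.0 * len(mus)), width)
```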
Table 1: Distribution over DBN hyper-parameters for random sampling. Options separated by "or"
such as pre-processing (and including the random seed) are weighted equally. Symbol U means
uniform, N means Gaussian-distributed, and log U means uniformly distributed in the log-domain.
CD (also known as CD-1) stands for contrastive divergence, the algorithm used to initialize the layer
parameters of the DBN.

Whole model:
  pre-processing           raw or ZCA
  ZCA energy               U(.5, 1)
  random seed              5 choices
  classifier learn rate    log U(0.001, 10)
  classifier anneal start  log U(100, 10^4)
  classifier ℓ2-penalty    0 or log U(10^-7, 10^-4)
  n. layers                1 to 3
  batch size               20 or 100

Per-layer:
  n. hidden units          log U(128, 4096)
  W init                   U(-a, a) or N(0, a^2)
  a                        algo A or B (see text)
  algo A coef              U(.2, 2)
  CD epochs                log U(1, 10^4)
  CD learn rate            log U(10^-4, 1)
  CD anneal start          log U(10, 10^4)
  CD sample data           yes or no
5 Random Search for Hyper-Parameter Optimization in DBNs
One simple, but recent step toward formalizing hyper-parameter optimization is the use of random
search [5]. [19] showed that random search was much more efficient than grid search for optimizing
the parameters of one-layer neural network classifiers. In this section, we evaluate random search
for DBN optimization, compared with the sequential grid-assisted manual search carried out in [1].
We chose the prior listed in Table 1 to define the search space over DBN configurations. The details
of the datasets, the DBN model, and the greedy layer-wise training procedure based on CD are
provided in [1]. This prior corresponds to the search space of [1] except for the following differences:
(a) we allowed for ZCA pre-processing [20], (b) we allowed for each layer to have a different size,
(c) we allowed for each layer to have its own training parameters for CD, (d) we allowed for the
possibility of treating the continuous-valued data as either as Bernoulli means (more theoretically
correct) or Bernoulli samples (more typical) in the CD algorithm, and (e) we did not discretize the
possible values of real-valued hyper-parameters. These changes expand the hyper-parameter search
problem, while maintaining the original hyper-parameter search space as a subset of the expanded
search space.
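As a concrete illustration of drawing from this kind of prior, the sketch below samples a few of the hyper-parameters of Table 1, including a conditional one (ZCA energy only exists when ZCA pre-processing is chosen) and part of the per-layer block. The key names and the subset of parameters shown are illustrative only.

```python
import math
import random

def sample_dbn_config(rng=None):
    """Draw one configuration from part of the Table 1 prior. The key
    names and the subset of hyper-parameters are illustrative only."""
    rng = rng or random.Random()

    def log_uniform(lo, hi):
        return math.exp(rng.uniform(math.log(lo), math.log(hi)))

    cfg = {
        "preprocessing": rng.choice(["raw", "zca"]),
        "n_layers": rng.randint(1, 3),
        "batch_size": rng.choice([20, 100]),
        "classifier_learn_rate": log_uniform(0.001, 10),
    }
    if cfg["preprocessing"] == "zca":
        cfg["zca_energy"] = rng.uniform(0.5, 1.0)   # conditional parameter
    for layer in range(cfg["n_layers"]):            # per-layer block
        cfg[f"layer{layer}_n_hidden"] = int(round(log_uniform(128, 4096)))
        cfg[f"layer{layer}_cd_learn_rate"] = log_uniform(1e-4, 1.0)
    return cfg
```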
The results of this preliminary random search are in Figure 2. Perhaps surprisingly, the result of
manual search can be reliably matched with 32 random trials for several datasets. The efficiency
of random search in this setting is explored further in [21]. Where random search results match
human performance, it is not clear from Figure 2 whether the reason is that it searched the original
space as efficiently, or that it searched a larger space where good performance is easier to find. But
the objection that random search is somehow cheating by searching a larger space is backward:
the search space outlined in Table 1 is a natural description of the hyper-parameter optimization
problem, and the restrictions to that space by [1] were presumably made to simplify the search
problem and make it tractable for grid-search assisted manual search. Critically, both methods train
DBNs on the same datasets.
The results in Figure 2 indicate that hyper-parameter optimization is harder for some datasets. For
example, in the case of the "MNIST rotated background images" dataset (MRBI), random sampling
appears to converge to a maximum relatively quickly (best models among experiments of 32 trials
show little variance in performance), but this plateau is lower than what was found by manual search.
In another dataset (convex), the random sampling procedure exceeds the performance of manual
search, but is slow to converge to any sort of plateau. There is considerable variance in generalization
when the best of 32 models is selected. This slow convergence indicates that better performance is
probably available, but we need to search the configuration space more efficiently to find it. The
remainder of this paper explores sequential optimization strategies for hyper-parameter optimization
for these two datasets: convex and MRBI.
6 Sequential Search for Hyper-Parameter Optimization in DBNs
We validated our GP approach of Section 3.1 by comparing with random sampling on the Boston
Housing dataset, a regression task with 506 points made of 13 scaled input variables and a scalar
[Figure 2: six panels plotting accuracy against experiment size (# trials, from 1 to 128) for the datasets mnist basic, mnist background images, mnist rotated background images, convex, rectangles, and rectangles images.]
Figure 2: Deep Belief Network (DBN) performance according to random search. Random
search is used to explore up to 32 hyper-parameters (see Table 1). Results found using a
grid-search-assisted manual search over a similar domain with an average 41 trials are
given in green (1-layer DBN) and red (3-layer DBN). Each box-plot (for N = 1, 2, 4, ...)
shows the distribution of test set performance when the best model among N random trials
is selected. The datasets "convex" and "mnist rotated background images" are used for
more thorough hyper-parameter optimization.
regressed output. We trained a Multi-Layer Perceptron (MLP) with 10 hyper-parameters, including
learning rate, ℓ1 and ℓ2 penalties, size of hidden layer, number of iterations, whether a PCA preprocessing was to be applied, whose energy was the only conditional hyper-parameter [22]. Our
results are depicted in Figure 3. The first 30 iterations were made using random sampling, while
from the 30th on, we differentiated the random samples from the GP approach trained on the updated
history. The experiment was repeated 20 times. Although the number of points is particularly small
compared to the dimensionality, the surrogate modelling approach finds noticeably better points than
random, which supports the application of SMBO approaches to more ambitious tasks and datasets.
Applying the GP to the problem of optimizing DBN performance, we allowed 3 random restarts of
the CMA-ES algorithm per proposal x*, and up to 500 iterations of the conjugate gradient method in
fitting the length scales of the GP. The squared exponential kernel [14] was used for every node.
The CMA-ES part dealt with boundaries using a penalty method; the binomial sampling part
dealt with them by nature. The GP algorithm was initialized with 30 randomly sampled points in H.
After 200 trials, the prediction of a point x* using this GP took around 150 seconds.
For the TPE-based algorithm, we chose γ = 0.15 and picked the best among 100 candidates drawn
from ℓ(x) on each iteration as the proposal x*. After 200 trials, the prediction of a point x* using
this TPE algorithm took around 10 seconds. TPE was allowed to grow past the initial bounds used
for random sampling in the course of optimization, whereas the GP and random search were
restricted to stay within the initial bounds throughout the course of optimization. The TPE algorithm
was also initialized with the same 30 randomly sampled points as were used to seed the GP.
6.1 Parallelizing Sequential Search
Both the GP and TPE approaches were actually run asynchronously in order to make use of multiple
compute nodes and to avoid wasting time waiting for trial evaluations to complete. For the GP approach, the so-called constant liar approach was used: each time a candidate point x* was proposed,
a fake fitness evaluation equal to the mean of the y's within the training set D was assigned temporarily, until the evaluation completed and reported the actual loss f(x*). For the TPE approach,
we simply ignored recently proposed points and relied on the stochasticity of draws from ℓ(x) to
provide different candidates from one iteration to the next. The consequence of parallelization is
that each proposal x* is based on less feedback. This makes search less efficient, though faster in
terms of wall time.
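A sketch of the constant-liar batching described here, with assumed interfaces for the surrogate:

```python
def constant_liar_batch(suggest, fit_model, history, batch_size):
    # Propose a batch for parallel evaluation: each pending proposal is
    # temporarily assigned the mean observed loss (the "lie") so that
    # refitting the surrogate discourages re-proposing the same point.
    lie = sum(y for _, y in history) / max(len(history), 1)
    fake_history = list(history)
    batch = []
    for _ in range(batch_size):
        model = fit_model(fake_history)
        x_star = suggest(model, fake_history)
        batch.append(x_star)
        fake_history.append((x_star, lie))  # pretend it was evaluated
    return batch
```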
Table 2: The test set classification error of the best model found by each search algorithm on each problem. Each search algorithm was allowed up to 200 trials. The manual searches used 82 trials for convex and 27 trials for MRBI.

              convex            MRBI
  TPE         14.13 ± 0.30%     44.55 ± 0.44%
  GP          16.70 ± 0.32%     47.08 ± 0.44%
  Manual      18.63 ± 0.34%     47.39 ± 0.44%
  Random      18.97 ± 0.34%     50.52 ± 0.44%

[Figure 3: best value found so far plotted against time (0 to 50) for GP and Random; y-axis from 14 to 26.]
Figure 3: After time 30, GP optimizing the MLP hyper-parameters on the Boston Housing regression task. Best minimum found so far every 5 iterations, against time. Red = GP, Blue = Random. Shaded areas = one-sigma error bars.
Runtime per trial was limited to 1 hour of GPU computation regardless of whether execution was on
a GTX 285, 470, 480, or 580. The difference in speed between the slowest and fastest machine was
roughly two-fold in theory, but the actual efficiency of computation depended also on the load of the
machine and the configuration of the problem (the relative speed of the different cards is different in
different hyper-parameter configurations). With the parallel evaluation of up to five proposals from
the GP and TPE algorithms, each experiment took about 24 hours of wall time using five GPUs.
7 Discussion
The trajectories (H) constructed by each algorithm up to 200 steps are illustrated in Figure 4, and
compared with random search and the manual search carried out in [1]. The generalization scores
of the best models found using these algorithms and others are listed in Table 2. On the convex
dataset (2-way classification), both algorithms converged to a validation score of 13% error. In
generalization, TPE's best model had 14.1% error and GP's best had 16.7%. TPE's best was significantly better than both manual search (19%) and random search with 200 trials (17%). On the
MRBI dataset (10-way classification), random search was the worst performer (50% error), the GP
approach and manual search approximately tied (47% error), while the TPE algorithm found a new
best result (44% error). The models found by the TPE algorithm in particular are better than previously found ones on both datasets. The GP and TPE algorithms were slightly less efficient than
manual search: GP and EI identified performance on par with manual search within 80 trials, the
manual search of [1] used 82 trials for convex and 27 trials for MRBI.
There are several possible reasons for why the TPE approach outperformed the GP approach in
these two datasets. Perhaps the inverse factorization of p(x|y) is more accurate than the p(y|x) in
the Gaussian process. Perhaps, conversely, the exploration induced by the TPE's lack of accuracy
turned out to be a good heuristic for search. Perhaps the hyper-parameters of the GP approach itself
were not set to correctly trade off exploitation and exploration in the DBN configuration space. More
empirical work is required to test these hypotheses. Critically though, all four SMBO runs matched
or exceeded both random search and a careful human-guided search, which are currently the state
of the art methods for hyper-parameter optimization.
The GP and TPE algorithms work well in both of these settings, but there are certainly settings
in which these algorithms, and in fact SMBO algorithms in general, would not be expected to do
well. Sequential optimization algorithms work by leveraging structure in observed (x, y) pairs. It is
possible for SMBO to be arbitrarily bad with a bad choice of p(y|x). It is also possible to be slower
than random sampling at finding a global optimum with a apparently good p(y|x), if it extracts
structure in H that leads only to a local optimum.
8 Conclusion
This paper has introduced two sequential hyper-parameter optimization algorithms, and shown them
to meet or exceed human performance and the performance of a brute-force random search in two
difficult hyper-parameter optimization tasks involving DBNs. We have relaxed standard constraints
(e.g. equal layer sizes at all layers) on the search space, and fallen back on a more natural hyper-parameter space of 32 variables (including both discrete and continuous variables) in which many
[Figure 4: two panels, Dataset: convex (left) and Dataset: mnist rotated background images (right), plotting error (fraction incorrect) against time (trials, 0 to 200) for manual, the 99.5-th quantile of random trials, GP, and TPE.]
Figure 4: Efficiency of Gaussian Process-based (GP) and graphical model-based (TPE) sequential optimization algorithms on the task of optimizing the validation set performance
of a DBN of up to three layers on the convex task (left) and the MRBI task (right). The
dots are the elements of the trajectory H produced by each SMBO algorithm. The solid
coloured lines are the validation set accuracy of the best trial found before each point in
time. Both the TPE and GP algorithms make significant advances from their random initial conditions, and substantially outperform the manual and random search methods. A
95% confidence interval about the best validation means on the convex task extends 0.018
above and below each point, and on the MRBI task extends 0.021 above and below each
point. The solid black line is the test set accuracy obtained by domain experts using a
combination of grid search and manual search [1]. The dashed line is the 99.5% quantile of validation performance found among trials sampled from our prior distribution (see
Table 1), estimated from 457 and 361 random trials on the two datasets respectively.
variables are sometimes irrelevant, depending on the value of other parameters (e.g. the number of
layers). In this 32-dimensional search problem, the TPE algorithm presented here has uncovered new
best results on both of these datasets that are significantly better than what DBNs were previously
believed to achieve. Moreover, the GP and TPE algorithms are practical: the optimization for each
dataset was done in just 24 hours using five GPU processors. Although our results are only for
DBNs, our methods are quite general, and extend naturally to any hyper-parameter optimization
problem in which the hyper-parameters are drawn from a measurable set.
We hope that our work may spur researchers in the machine learning community to treat the hyperparameter optimization strategy as an interesting and important component of all learning algorithms. The question of "How well does a DBN do on the convex task?" is not a fully specified,
empirically answerable question: different approaches to hyper-parameter optimization will give
different answers.
results easier to disseminate, reproduce, and transfer to other domains. The specific algorithms we
have presented here are also capable, at least in some cases, of finding better results than were previously known. Finally, powerful hyper-parameter optimization algorithms broaden the horizon of
models that can realistically be studied; researchers need not restrict themselves to systems of a few
variables that can readily be tuned by hand.
The TPE algorithm presented in this work, as well as parallel evaluation infrastructure, is available
as BSD-licensed free open-source software, which has been designed not only to reproduce the
results in this work, but also to facilitate the application of these and similar algorithms to other
hyper-parameter optimization problems.1
Acknowledgements
This work was supported by the National Science and Engineering Research Council of Canada,
Compute Canada, and by the ANR-2010-COSI-002 grant of the French National Research Agency.
GPU implementations of the DBN model were provided by Theano [23].
1. "Hyperopt" software package: https://github.com/jaberg/hyperopt
References
[1] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML 2007, pages 473-480, 2007.
[2] G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527-1554, 2006.
[3] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371-3408, 2010.
[4] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[5] N. Pinto, D. Doukhan, J. J. DiCarlo, and D. D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11):e1000579, November 2009.
[6] A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning. NIPS Deep Learning and Unsupervised Feature Learning Workshop, 2010.
[7] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In Proceedings of the Twenty-Eighth International Conference on Machine Learning (ICML-11), 2011.
[8] F. Hutter. Automated Configuration of Algorithms for Solving Hard Computational Problems. PhD thesis, University of British Columbia, 2009.
[9] F. Hutter, H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In LION-5, 2011. Extended version as UBC Tech Report TR-2010-10.
[10] D. R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21:345-383, 2001.
[11] J. Villemonteix, E. Vazquez, and E. Walter. An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization, 2006.
[12] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In ICML, 2010.
[13] J. Mockus, V. Tiesis, and A. Zilinskas. The application of Bayesian methods for seeking the extremum. In L. C. W. Dixon and G. P. Szego, editors, Towards Global Optimization, volume 2, pages 117-129. North Holland, New York, 1978.
[14] C. E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[15] D. Ginsbourger, D. Dupuy, A. Badea, L. Carraro, and O. Roustant. A note on the choice and the estimation of kriging models for the analysis of deterministic computer experiments. Applied Stochastic Models in Business and Industry, 25:115-131, 2009.
[16] R. Bardenet and B. Kégl. Surrogating the surrogate: accelerating Gaussian Process optimization with mixtures. In ICML, 2010.
[17] P. Larrañaga and J. Lozano, editors. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Springer, 2001.
[18] N. Hansen. The CMA evolution strategy: a comparing review. In J. A. Lozano, P. Larrañaga, I. Inza, and E. Bengoetxea, editors, Towards a New Evolutionary Computation: Advances on Estimation of Distribution Algorithms, pages 75-102. Springer, 2006.
[19] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. The Learning Workshop (Snowbird), 2011.
[20] A. Hyvärinen and E. Oja. Independent component analysis: Algorithms and applications. Neural Networks, 13(4-5):411-430, 2000.
[21] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. JMLR, 2012. Accepted.
[22] C. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[23] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010.
| 4443 | [vw_text: bag-of-words feature vector for paper 4443; omitted] |
3,804 | 4,444 | Algorithms and hardness results
for parallel large margin learning
Rocco A. Servedio
Columbia University
[email protected]
Philip M. Long
Google
[email protected]
Abstract
We study the fundamental problem of learning an unknown large-margin halfspace in the context of parallel computation.
Our main positive result is a parallel algorithm for learning a large-margin halfspace that is based on interior point methods from convex optimization and fast
parallel algorithms for matrix computations. We show that this algorithm learns
an unknown γ-margin halfspace over n dimensions using poly(n, 1/γ) processors
and runs in time Õ(1/γ) + O(log n). In contrast, naive parallel algorithms that
learn a γ-margin halfspace in time that depends polylogarithmically on n have
Ω(1/γ²) runtime dependence on γ.
Our main negative result deals with boosting, which is a standard approach to
learning large-margin halfspaces. We give an information-theoretic proof that in
the original PAC framework, in which a weak learning algorithm is provided as an
oracle that is called by the booster, boosting cannot be parallelized: the ability to
call the weak learner multiple times in parallel within a single boosting stage does
not reduce the overall number of successive stages of boosting that are required.
1  Introduction
In this paper we consider large-margin halfspace learning in the PAC model: there is a target halfspace f(x) = sign(w · x), where w is an unknown unit vector, and an unknown probability distribution D over the unit ball Bₙ = {x ∈ ℝⁿ : ‖x‖₂ ≤ 1} which has support on {x ∈ Bₙ : |w · x| ≥ γ}.
(Throughout this paper we refer to such a combination of target halfspace f and distribution D
as a γ-margin halfspace.) The learning algorithm is given access to labeled examples (x, f(x))
where each x is independently drawn from D, and it must with high probability output a hypothesis
h : ℝⁿ → {−1, 1} that satisfies Pr_{x∼D}[h(x) ≠ f(x)] ≤ ε. Learning a large-margin halfspace is
a fundamental problem in machine learning; indeed, one of the most famous algorithms in machine
learning is the Perceptron algorithm [25] for this problem. PAC algorithms based on the Perceptron [17] run in poly(n, 1/γ, 1/ε) time, use O(1/(εγ²)) labeled examples in ℝⁿ, and learn an unknown
n-dimensional γ-margin halfspace to accuracy 1 − ε.
A motivating question: achieving Perceptron's performance in parallel? The last few years have
witnessed a resurgence of interest in highly efficient parallel algorithms for a wide range of computational problems in many areas including machine learning [33, 32]. So a natural goal is to develop
an efficient parallel algorithm for learning γ-margin halfspaces that matches the performance of the
Perceptron algorithm. A well-established theoretical notion of efficient parallel computation is that
an efficient parallel algorithm for a problem with input size N is one that uses poly(N ) processors
and runs in parallel time polylog(N ), see e.g. [12]. Since the input to the Perceptron algorithm is a
sample of poly(1/ε, 1/γ) labeled examples in ℝⁿ, we naturally arrive at the following:
Algorithm                                  Number of processors   Running time
naive parallelization of Perceptron        poly(n, 1/γ)           Õ(1/γ²) + O(log n)
naive parallelization of [27]              poly(n, 1/γ)           Õ(1/γ²) + O(log n)
polynomial-time linear programming [2]     1                      poly(n, log(1/γ))
This paper                                 poly(n, 1/γ)           Õ(1/γ) + O(log n)

Table 1: Bounds on various parallel algorithms for learning a γ-margin halfspace over ℝⁿ.
Main Question: Is there a learning algorithm that uses poly(n, 1/γ, 1/ε) processors
and runs in time poly(log n, log 1/γ, log 1/ε) to learn an unknown n-dimensional γ-margin halfspace to accuracy 1 − ε?
(See [31] for a detailed definition of parallel learning algorithms; here we only recall that an efficient parallel learning algorithm's hypothesis must be efficiently evaluatable in parallel.) As Freund
[10] has largely settled how the resources required by parallel algorithms scale with the accuracy
parameter (see Lemma 6 below), our focus in this paper is on γ and n, leading to the following:
Main Question (simplified): Is there a learning algorithm that uses poly(n, 1/γ)
processors and runs in time poly(log n, log 1/γ) to learn an unknown n-dimensional
γ-margin halfspace to accuracy 9/10?
This question, which we view as a fundamental open problem, inspired the research reported here.
Prior results. Table 1 summarizes the running time and number of processors used by various parallel algorithms to learn a γ-margin halfspace over ℝⁿ. The naive parallelization of Perceptron in
the first line of the table is an algorithm that runs for O(1/γ²) stages; in each stage it processes
all of the O(1/γ²) examples simultaneously in parallel, identifies one that causes the Perceptron
algorithm to update its hypothesis vector, and performs this update. We do not see how to obtain
parallel time bounds better than O(1/γ²) from recent analyses of other algorithms based on gradient descent (such as [7, 8, 4]), some of which use assumptions incomparable in strength to the
γ-margin condition studied here. The second line of the table corresponds to a similar naive parallelization of the boosting-based algorithm of [27] that achieves Perceptron-like performance for
learning a γ-margin halfspace. It boosts for O(1/γ²) stages over a O(1/γ²)-size sample; using one
processor for each coordinate of each example, the running time bound is Õ(1/γ²) · log n, using
poly(n, 1/γ) processors. (For both this algorithm and the Perceptron the time bound can be improved to Õ(1/γ²) + O(log n) as claimed in the table by using an initial random projection step;
we explain how to do this in Section 2.) The third line of the table, included for comparison, is
simply a standard sequential algorithm for learning a halfspace based on polynomial-time linear
programming executed on one processor, see e.g. [2, 14].
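To make the first row of Table 1 concrete, here is a minimal Python/NumPy sketch of the naive parallelization of the Perceptron described above; the vectorized margin check stands in for the one-processor-per-example parallel model, and the synthetic data generator and all names are our own illustration, not part of the original analysis.

```python
import numpy as np

def naive_parallel_perceptron(X, y, gamma):
    """One Perceptron update per stage; each stage checks every example
    'in parallel' (conceptually one processor per example).  The classic
    mistake bound gives at most 1/gamma^2 stages on a gamma-margin sample."""
    w = np.zeros(X.shape[1])
    for _ in range(int(np.ceil(1.0 / gamma**2))):
        margins = y * (X @ w)                 # all margin checks at once
        bad = np.flatnonzero(margins <= 0)
        if bad.size == 0:
            break                             # consistent with the sample
        w = w + y[bad[0]] * X[bad[0]]         # standard Perceptron update
    return w

rng = np.random.default_rng(0)
n, gamma = 20, 0.1
w_star = rng.normal(size=n); w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(500, n)); X /= np.linalg.norm(X, axis=1, keepdims=True)
keep = np.abs(X @ w_star) >= gamma            # enforce a gamma-margin sample
X, y = X[keep], np.sign(X[keep] @ w_star)
w = naive_parallel_perceptron(X, y, gamma)
print("training errors:", int(np.sum(np.sign(X @ w) != y)))
```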
Efficient parallel algorithms have been developed for some simpler PAC learning problems such
as learning conjunctions, disjunctions, and symmetric Boolean functions [31]. [6] gave efficient
parallel PAC learning algorithms for some geometric constant-dimensional concept classes.
In terms of negative results for parallel learning, [31] shows that (under a complexity-theoretic
assumption) there is no parallel algorithm using poly(n) processors and polylog(n) time that
constructs a halfspace hypothesis that is consistent with a given linearly separable data set of ndimensional labeled examples. This does not give a negative answer to the Main Question for several
reasons: the Main Question allows any hypothesis representation (that can be efficiently evaluated in
parallel), allows the number of processors to grow inverse polynomially with the margin parameter
γ, and allows the final hypothesis to err on up to (say) 5% of the points in the data set.
Our results. Our main positive result is a parallel algorithm that uses poly(n, 1/γ) processors
to learn γ-margin halfspaces in parallel time Õ(1/γ) + O(log n) (see Table 1). We believe ours
is the first algorithm that runs in time polylogarithmic in n and subquadratic in 1/γ. Our analysis can be modified to establish similar positive results for other formulations of the large-margin
learning problem, including ones (see [28]) that have been tied closely to weak learnability (these
modifications are not presented due to space constraints). In contrast, our main negative result is an
information-theoretic argument that suggests that such positive parallel learning results cannot be
obtained by boosting alone. We show that if the weak learner must be called as an oracle, boosting
cannot be parallelized: any parallel booster must perform Ω(1/γ²) sequential stages of boosting a
"black-box" γ-advantage weak learner in the worst case. This extends an earlier lower bound of
Freund [10] for standard (sequential) boosters that can only call the weak learner once per stage.
2  A parallel algorithm for learning γ-margin halfspaces over Bₙ
Our parallel algorithm is an amalgamation of existing tools from high-dimensional geometry, convex optimization, parallel algorithms for linear algebra, and learning theory. Roughly speaking the
algorithm works as follows: given a data set of m = Õ(1/γ²) labeled examples from Bₙ × {−1, 1},
it begins by randomly projecting the examples down to d = Õ(1/γ²) dimensions. This essentially
preserves the geometry so the resulting d-dimensional labeled examples are still linearly separable
with margin Θ(γ). The algorithm then uses a variant of a linear programming algorithm of Renegar
[24, 21] which, roughly speaking, solves linear programs with m constraints to high accuracy using
(essentially) √m stages of Newton's method. Within Renegar's algorithm we employ fast parallel
algorithms for linear algebra [22] to carry out each stage of Newton's method in polylog(1/γ) parallel time steps. This suffices to learn the unknown halfspace to high constant accuracy (say 9/10);
to get a 1 − ε-accurate hypothesis we combine the above procedure with Freund's approach [10]
for boosting accuracy that was mentioned in the introduction. The above sketch omits many details,
including crucial issues of precision in solving the linear programs to adequate accuracy. In the rest
of this section we address the necessary details in full and prove the following theorem:
Theorem 1 There is a parallel algorithm with the following performance guarantee: Let f, D define
an unknown γ-margin halfspace over Bₙ as described in the introduction. The algorithm is given as
input ε, δ > 0 and access to labeled examples (x, f(x)) that are drawn independently from D. It runs
in O(((1/γ)polylog(1/γ) + log(n)) log(1/ε) + log log(1/δ)) time, uses poly(n, 1/γ, 1/ε, log(1/δ))
processors, and with probability 1 − δ it outputs a hypothesis h satisfying Pr_{x∼D}[h(x) ≠ f(x)] ≤ ε.
We assume that the value of γ is "known" to the algorithm, since otherwise the algorithm can use a
standard "guess and check" approach trying γ = 1, 1/2, 1/4, etc., until it finds a value that works.
We first describe the tools from the literature that are used in the algorithm.
Random projection. We say that a random projection matrix is a matrix A chosen uniformly from
{−1, 1}^{n×d}. Given such an A and a unit vector w ∈ ℝⁿ (recall that the target halfspace f is
f(x) = sign(w · x)), let w′ denote (1/√d)wA. After transformation by A the distribution D over
Bₙ is transformed to a distribution D′ over ℝᵈ in the natural way: a draw x′ from D′ is obtained by
making a draw x from D and setting x′ = (1/√d)xA. We will use the following lemma from [1]:
Lemma 1 [1] Let f(x) = sign(w · x) and D define a γ-margin halfspace as described in the introduction. For d = O((1/γ²) log(1/γ)), a random n × d projection matrix A will with probability 99/100 induce D′ and w′ as described above such that
Pr_{x′∼D′}[ |(w′/‖w′‖) · x′| < γ/2 or ‖x′‖₂ > 2 ] ≤ γ⁴.
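The projection step of Lemma 1 is simple to state in code. Below is a small sketch, assuming the sign-matrix construction defined above; the final print only illustrates the lemma's event empirically and is not part of its proof, and the data generator and names are ours.

```python
import numpy as np

def random_project(X, w, d, rng):
    """Multiply by a uniform {-1,+1}^{n x d} sign matrix, scaled by 1/sqrt(d)."""
    A = rng.choice([-1.0, 1.0], size=(X.shape[1], d))
    return X @ A / np.sqrt(d), w @ A / np.sqrt(d)

rng = np.random.default_rng(1)
n, gamma = 2000, 0.1
d = int(np.ceil(np.log(1.0 / gamma) / gamma**2))
w = rng.normal(size=n); w /= np.linalg.norm(w)
# Build unit-norm examples whose margin |w . x| lies in [gamma, 0.5].
raw = rng.normal(size=(1000, n))
raw -= np.outer(raw @ w, w)                   # component orthogonal to w
raw /= np.linalg.norm(raw, axis=1, keepdims=True)
m = rng.uniform(gamma, 0.5, size=1000) * rng.choice([-1.0, 1.0], size=1000)
X = np.sqrt(1.0 - m**2)[:, None] * raw + m[:, None] * w
Xp, wp = random_project(X, w, d, rng)
proj_margin = np.abs(Xp @ wp) / np.linalg.norm(wp)
bad = (proj_margin < gamma / 2) | (np.linalg.norm(Xp, axis=1) > 2)
print("empirical failure fraction:", bad.mean())   # Lemma 1: small w.h.p.
```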
Convex optimization. We recall some tools we will use from convex optimization over ℝᵈ [24, 3].
Let F be the convex barrier function F(u) = Σ_{i=1}^{d} log( 1 / ((uᵢ − aᵢ)(bᵢ − uᵢ)) ) (we specify the values
aᵢ < bᵢ below). Let g(u) be the gradient of F at u; note that g(u)ᵢ = 1/(bᵢ − uᵢ) − 1/(uᵢ − aᵢ). Let H(u)
be the Hessian of F at u, let ‖v‖ᵤ = √(vᵀH(u)v), and let n(u) = −H(u)⁻¹g(u) be the Newton
step at u. For a linear subspace L of ℝᵈ, let F|_L be the restriction of F to L, i.e. the function that
evaluates to F on L and ∞ everywhere else.
We will apply interior point methods to approximately solve problems of the following form, where
a₁, ..., a_d, b₁, ..., b_d ∈ [−2, 2], |bᵢ − aᵢ| ≥ 2 for all i, and L is a subspace of ℝᵈ:
minimize −u₁ such that u ∈ L and aᵢ ≤ uᵢ ≤ bᵢ for all i.   (1)
Let z ∈ ℝᵈ be the minimizer, and let opt be the optimal value of (1).
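For readers who want the barrier machinery above in concrete form, here is a minimal sketch (with no subspace constraint, i.e. L = ℝᵈ, and our own names) of the gradient, Newton step n_μ(u) and decrement ‖n_μ(u)‖_u for the shifted objective F_μ defined next.

```python
import numpy as np

def barrier_newton(u, a, b, mu):
    """Newton step and decrement for F_mu(u) = -mu*u_1 + F(u), where
    F(u) = sum_i log(1 / ((u_i - a_i) * (b_i - u_i)))."""
    lo, hi = u - a, b - u                     # distances to the box walls
    g = 1.0 / hi - 1.0 / lo                   # gradient of the barrier F
    g[0] -= mu                                # add the linear term -mu*u_1
    h = 1.0 / lo**2 + 1.0 / hi**2             # Hessian of F (diagonal here)
    step = -g / h                             # n_mu(u) = -H(u)^{-1} g_mu(u)
    dec = np.sqrt(np.sum(step**2 * h))        # ||n_mu(u)||_u
    return step, dec

a, b = np.full(3, -2.0), np.full(3, 2.0)
step, dec = barrier_newton(np.zeros(3), a, b, mu=1.0)
print("Newton step:", step, " decrement:", round(float(dec), 3))
```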
The algorithm we analyze minimizes F_μ(u) := −μu₁ + F|_L(u) for successively larger values of μ.
Let z(μ) be the minimizer of F_μ, let opt_μ = F_μ(z(μ)), and let n_μ(u) be its Newton step. (To keep
the notation clean, the dependence on L is suppressed from the notation.)
As in [23], we periodically round intermediate solutions to keep the bit complexity under control.
The analysis of such rounding in [23] requires a problem transformation which does not preserve
the large-margin condition that we need for our analysis, so we give a new analysis, using tools
from [24], and a simpler algorithm. It is easier to analyze the effect of the rounding on the quality
of the solution than on the progress measure used in [24]. Fortunately, [3] describes an algorithm
that can go from an approximately optimal solution to a solution with a good measure of progress
while controlling the bit complexity of the output. The algorithm repeatedly finds the direction of
the Newton step, and then performs a line search to find the approximately optimal step size.
Lemma 2 ([3, Section 9.6.4]) There is an algorithm A_bt with the following property. Suppose for
any μ > 0, A_bt is given u with rational components such that F_μ(u) − opt_μ ≤ 2. Then after
constantly many iterations of Newton's method and back-tracking line search, A_bt returns a u⁺ that
(i) satisfies ‖n_μ(u⁺)‖_{u⁺} ≤ 1/9; and (ii) has rational components that have bit-length bounded by a
polynomial in d, the bit length of u, and the bit length of the matrix A such that L = {v : Av = 0}.¹
We analyze the following variant of the usual central path algorithm for linear programming, which
we call A_cpr. It takes as input a precision parameter κ and outputs the final u(k).
• Set μ₁ = 1, β = 1 + 1/(8√(2d)), and ρ = κ / (d·2^{⌈d(5d/κ + √d·2^{10d/κ+2d+1})⌉}).
• Given u as input, run A_bt starting with u to obtain u(1) such that ‖n_{μ₁}(u(1))‖_{u(1)} ≤ 1/9.
• For k from 2 to 1 + ⌈log(4d/κ)/log(β)⌉ perform the following steps (i)–(iv): (i) set μ_k = β·μ_{k−1};
(ii) set w(k) = u(k−1) + n_{μ_k}(u(k−1)) (i.e. do one step of Newton's method); (iii) form r(k)
by rounding each component of w(k) to the nearest multiple of ρ, and then projecting back
onto L; (iv) run A_bt starting with r(k) to obtain u(k) such that ‖n_{μ_k}(u(k))‖_{u(k)} ≤ 1/9.
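The schedule above can be sketched compactly as follows; a damped Newton loop stands in for A_bt, and the rounding/projection steps (iii) are omitted, so this is an illustration of the central-path schedule only, not of the bit-complexity control. All names are our own.

```python
import numpy as np

def damped_newton(u, a, b, mu, tol=1/9, iters=100):
    """Stand-in for A_bt: damped Newton on F_mu over the open box (a, b)."""
    for _ in range(iters):
        lo, hi = u - a, b - u
        g = 1.0 / hi - 1.0 / lo
        g[0] -= mu
        h = 1.0 / lo**2 + 1.0 / hi**2
        step = -g / h
        dec = np.sqrt(np.sum(step**2 * h))
        if dec <= tol:                         # Newton decrement small: done
            break
        u = u + step / (1.0 + dec)             # damped step keeps u feasible
    return u

def central_path(a, b, u, kappa):
    """Follow mu_1 = 1, mu_k = beta * mu_{k-1} until the additive error
    bound 4d/mu (Lemma 3) drops below kappa."""
    d = len(u)
    mu, beta = 1.0, 1.0 + 1.0 / (8.0 * np.sqrt(2.0 * d))
    u = damped_newton(u, a, b, mu)
    while mu < 4.0 * d / kappa:
        mu *= beta
        u = damped_newton(u, a, b, mu)
    return u

a, b = np.full(2, -2.0), np.full(2, 2.0)
u = central_path(a, b, np.zeros(2), kappa=0.01)
print(u)                                       # u_1 approaches its maximum, 2
```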
The following lemma, implicit² in [3, 24], bounds the quality of the solutions in terms of the progress
measure ‖n_{μ_k}(u)‖_u.
Lemma 3 If u ∈ L and ‖n_{μ_k}(u)‖_u ≤ 1/9, then F_{μ_k}(u) − opt_{μ_k} ≤ ‖n_{μ_k}(u)‖²_u and −u₁ − opt ≤ 4d/μ_k.
The following key lemma shows that rounding intermediate solutions does not do too much harm:
Lemma 4 For any k, if F_{μ_k}(w(k)) ≤ opt_{μ_k} + 1/9, then F_{μ_k}(r(k)) ≤ opt_{μ_k} + 1.
Proof: Fix k, and note that μ_k = β^{k−1} ≤ 5d/κ. We henceforth drop k from all notation.
First, we claim that
δ := minᵢ min{|aᵢ − wᵢ|, |bᵢ − wᵢ|} ≥ 2^{−2μ−2d−1/9}.   (2)
Let m = ((a₁ + b₁)/2, ..., (a_d + b_d)/2). Since F_μ(w) ≤ opt_μ + 1/9, we have F_μ(w) ≤ F_μ(m) +
1/9 ≤ μ + 1/9. But minimizing each term of F_μ separately, we get F_μ(w) ≥ log(1/δ) − 2d − μ.
Combining this with the previous inequality and solving for δ yields (2).
Since ‖w − r‖ ≤ √d·ρ, recalling that ρ ≤ 2^{−⌈d(5d/κ + √d·2^{10d/κ+2d+1})⌉}, we have
minᵢ min{|aᵢ − rᵢ|, |bᵢ − rᵢ|} ≥ 2^{−2μ−2d−1/9} − √d·ρ ≥ 2^{−2μ−2d−1}.   (3)
¹We note for the reader's convenience that λ(u) in [3] is the same as our ‖n(u⁺)‖_{u⁺}. The analysis on pages
503–505 of [3] shows that a constant number of iterations suffice. Each step is a projection of H(u)⁻¹g(u)
onto L, which can be seen to have bit-length bounded by a polynomial in the bit-length of u. Composing
polynomials constantly many times yields a polynomial, which gives the claimed bit-length bound for u⁺.
²The first inequality is (9.50) from [3]. The last line of p. 46 of [24] proves that ‖n_{μ_k}(u)‖_u ≤ 1/9 implies
‖u − z(μ)‖_{z(μ)} ≤ 1/5, from which the second inequality follows by (2.14) of [24], using the fact that ϑ = 2d
(proved on page 35 of [24]).
Now, define φ : ℝ → ℝ by φ(t) = F_μ( w + t·(r − w)/‖r − w‖ ). We have
F_μ(r) − F_μ(w) = φ(‖r − w‖) − φ(0) = ∫₀^{‖r−w‖} φ′(t) dt ≤ ‖r − w‖ · max_t |φ′(t)|.   (4)
Let S be the line segment between w and r. Since for each t ∈ [0, ‖r − w‖] the value φ′(t) is a
directional derivative of F_μ at some point of S, (4) implies that, for the gradient g_μ of F_μ,
F_μ(r) − F_μ(w) ≤ ‖w − r‖ · max{‖g_μ(s)‖ : s ∈ S}.   (5)
However (3) and (2) imply that minᵢ min{|aᵢ − sᵢ|, |bᵢ − sᵢ|} ≥ 2^{−2μ−2d−1} for all s ∈ S. Recalling that
g(u)ᵢ = 1/(bᵢ − uᵢ) − 1/(uᵢ − aᵢ), this means that ‖g_μ(s)‖ ≤ μ + √d·2^{2μ+2d+1}, so that applying (5) we get
F_μ(r) − F_μ(w) ≤ ‖w − r‖(μ + √d·2^{2μ+2d+1}) ≤ √d·ρ·(μ + √d·2^{2μ+2d+1}) ≤
ρ·d(5d/κ + √d·2^{10d/κ+2d+1}) ≤ 1/2, and the lemma follows.
Fast parallel linear algebra: inverting matrices. We will use an algorithm due to Reif [22]:
Lemma 5 ([22]) There is a polylog(d, L)-time, poly(d, L)-processor parallel algorithm which,
given as input a d × d matrix A with rational entries of total bit-length L, outputs A⁻¹.
Learning theory: boosting accuracy. The following is implicit in the analysis of Freund [10].
Lemma 6 ([10]) Let D be a distribution over (unlabeled) examples. Let A be a parallel learning
algorithm such that for all D′ with support(D′) ⊆ support(D), given draws (x, f(x)) from D′, with
probability 9/10 A outputs a hypothesis with accuracy 9/10 (w.r.t. D′) using P processors in T
time. Then there is a parallel algorithm B that with probability 1 − δ constructs a (1 − ε)-accurate
hypothesis (w.r.t. D) in O(T log(1/ε) + log log(1/δ)) time using poly(P, 1/ε, log(1/δ)) processors.
2.1  Proof of Theorem 1
As described at the start of this section, due to Lemma 6, it suffices to prove the theorem in the case
that ε = 1/10 and δ = 1/10. We assume w.l.o.g. that γ = 1/integer.
The algorithm first selects an n × d random projection matrix A where d = O(log(1/γ)/γ²).
This defines a transformation Φ_A : Bₙ → ℝᵈ as follows: given x ∈ Bₙ, the vector Φ_A(x) ∈
ℝᵈ is obtained by (i) rounding each xᵢ to the nearest integer multiple of 1/(4⌈√n/γ⌉); then (ii)
setting x′ = (1/(2√d))xA; and finally (iii) rounding each x′ᵢ to the nearest multiple of 1/(8⌈d/γ⌉).
Given x it is easy to compute Φ_A(x) using O(n log(1/γ)/γ²) processors in O(log(n/γ)) time. Let
D′ denote the distribution over ℝᵈ obtained by applying Φ_A to D. Across all coordinates D′ is
supported on rational numbers with the same poly(1/γ) common denominator. By Lemma 1, with
probability 99/100 over A, the target-distribution pair (w′ = (1/√d)wA, D′) satisfies
Pr_{x′∼D′}[ |x′ · (w′/‖w′‖)| < γ′ := γ/8 or ‖x′‖₂ > 1 ] ≤ γ⁴.   (6)
The algorithm next draws m = c·log(1/γ)/γ² labeled training examples (Φ_A(x), f(x)) from
D′; this can be done in O(log(n/γ)) time using O(n)·poly(1/γ) processors as noted above.
It then applies A_cpr to find a d-dimensional halfspace h that classifies all m examples correctly
(more on this below). By (6), with probability at least (say) 29/30 over the random draw of
(Φ_A(x₁), y₁), ..., (Φ_A(x_m), y_m), we have that y_t(w′ · Φ_A(x_t)) ≥ γ′ and ‖Φ_A(x_t)‖ ≤ 1 for all
t = 1, . . . , m. Now the standard VC bound for halfspaces [30] applied to h and D′ implies that
since h classifies all m examples correctly, with overall probability at least 9/10 its accuracy is at
least 9/10 with respect to D′, i.e. Pr_{x∼D}[h(Φ_A(x)) ≠ f(x)] ≤ 1/10. So the hypothesis h ∘ Φ_A has
accuracy 9/10 with respect to D with probability 9/10 as required by Lemma 6.
It remains to justify the above claim about A_cpr classifying all examples correctly, and analyze the
running time. More precisely we show that given m = O(log(1/γ)/γ²) training examples in B_d
with rational components that all have coordinates with a common denominator that is poly(1/γ)
and are separable with a margin γ′ = γ/8, A_cpr can be used to construct a d-dimensional halfspace
that classifies them all correctly in Õ(1/γ) parallel time using poly(1/γ) processors.
Given (x′₁, y₁), ..., (x′_m, y_m) ∈ B_d × {−1, 1} satisfying the above conditions, we will apply algorithm A_cpr to the following linear program, called LP, with κ = γ′/2: "minimize −s such that
y_t(v · x′_t) − s_t = s and 0 ≤ s_t ≤ 2 for all t ∈ [m]; −1 ≤ vᵢ ≤ 1 for all i ∈ [d]; and −2 ≤ s ≤ 2."
Intuitively, s is the minimum margin over all examples, and s_t is the difference between each example's margin and s. The subspace L is defined by the equality constraints y_t(v · x′_t) − s_t = s.
Our analysis will conclude by applying the following lemma, with an initial solution of s = −1,
v = 0, and s_t = 1 for all t. (Note that u₁ corresponds to s.)
Lemma 7 Given any d-dimensional linear program in the form (1), and an initial solution u ∈ L
such that min{|uᵢ − aᵢ|, |uᵢ − bᵢ|} ≥ 1 for all i, Algorithm A_cpr approximates the optimal solution
to an additive ±κ. It runs in √d · polylog(d/κ) parallel time and uses poly(1/κ, d) processors.
The LP constraints enforce that all examples are classified correctly with a margin of at least s. The
feasible solution in which v is w′/‖w′‖, s equals γ′ and s_t = y_t(v · x′_t) − s shows that the optimum
solution of LP has value at most −γ′. So approximating the optimum to an additive ±κ = ±γ′/2
ensures that all examples are classified correctly, and it is enough to prove Lemma 7.
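To make the reduction concrete, the following sketch assembles this LP from a labeled sample and solves it with a generic LP solver standing in for A_cpr; the variable layout and the toy data are our own.

```python
import numpy as np
from scipy.optimize import linprog

def margin_lp(X, y):
    """Solve: maximize s s.t. y_t (v . x_t) - s_t = s, 0 <= s_t <= 2,
    -1 <= v_i <= 1, -2 <= s <= 2.  Variable order: [s, v (d), slack (m)]."""
    m, d = X.shape
    c = np.zeros(1 + d + m); c[0] = -1.0            # minimize -s
    A_eq = np.zeros((m, 1 + d + m))
    A_eq[:, 0] = -1.0                                # ... = s, moved left
    A_eq[:, 1:1 + d] = y[:, None] * X                # y_t (v . x_t)
    A_eq[np.arange(m), 1 + d + np.arange(m)] = -1.0  # ... - s_t
    bounds = [(-2, 2)] + [(-1, 1)] * d + [(0, 2)] * m
    res = linprog(c, A_eq=A_eq, b_eq=np.zeros(m), bounds=bounds)
    return res.x[0], res.x[1:1 + d]                  # achieved s, weights v

rng = np.random.default_rng(2)
w = np.array([0.8, -0.6])
X = rng.uniform(-1, 1, size=(40, 2))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
keep = np.abs(X @ w) >= 0.15
X, y = X[keep], np.sign(X[keep] @ w)
s, v = margin_lp(X, y)
print("achieved margin s =", round(float(s), 3),
      " all correct:", bool(np.all(y * (X @ v) > 0)))
```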
Proof of Lemma 7: First, we claim that, for all k, ‖n_{μ_k}(u(k))‖_{u(k)} ≤ 1/9; given this, since the
final value of μ_k is at least 4d/κ, Lemma 3 implies that the solution is κ-close to optimal. We induct
on k. For k = 1, since initially minᵢ{|uᵢ − aᵢ|, |uᵢ − bᵢ|} ≥ 1, we have F(u) ≤ 0, and, since μ₁ = 1
and u₁ ≥ −1 we have F_{μ₁}(u) ≤ 1 and opt_{μ₁} ≥ −1. So we can apply Lemma 2 to get the base case.
Now, for the induction step, suppose ‖n_{μ_k}(u(k))‖_{u(k)} ≤ 1/9. It then follows³ from [24, page 46]
that ‖n_{μ_{k+1}}(w(k+1))‖_{w(k+1)} ≤ 1/9. Next, Lemmas 3 and 4 imply that F_{μ_{k+1}}(r(k+1)) − opt_{μ_{k+1}} ≤
1. Then Lemma 2 gives ‖n_{μ_{k+1}}(u(k+1))‖_{u(k+1)} ≤ 1/9 as required.
Next, we claim that the bit-length of all intermediate solutions is at most poly(d, 1/κ). This holds for
r(k), and follows for u(k) and w(k) because each of them is obtained from some r(k) by performing
a constant number of operations each of which blows up the bit length at most polynomially (see
Lemma 2). Since each intermediate solution has polynomial bit length, the matrix inverses can be
computed in polylog(d, 1/κ) time using poly(d, 1/κ) processors, by Lemma 5. The time bound
then follows from the fact that there are at most O(√d · log(d/κ)) iterations.
³Noting that ϑ ≤ 2d [24, page 35].
3  Lower bound for parallel boosting in the oracle model
Boosting is a widely used method for learning large-margin halfspaces. In this section we consider
the question of whether boosting algorithms can be efficiently parallelized. We work in the original
PAC learning setting [29, 16, 26] in which a weak learning algorithm is provided as an oracle that is
called by the boosting algorithm, which must simulate a distribution over labeled examples for the
weak learner. Our main result for this setting is that boosting is inherently sequential; being able to
to call the weak learner multiple times in parallel within a single boosting stage does not reduce the
overall number of sequential boosting stages that are required. In fact we show this in a very strong
sense, by proving that a boosting algorithm that runs arbitrarily many copies of the weak learner in
parallel in each stage cannot save even one stage over a sequential booster that runs the weak learner
just once in each stage. This lower bound is unconditional and information-theoretic.
Below we first define the parallel boosting framework and give some examples of parallel boosters.
We then state and prove our lower bound on the number of stages required by parallel boosters. A
consequence of our lower bound is that Ω(log(1/ε)/γ²) stages of parallel boosting are required in
order to boost a γ-advantage weak learner to achieve classification accuracy 1 − ε no matter how
many copies of the weak learner are used in parallel in each stage.
Our definition of weak learning is standard in PAC learning, except that for our discussion it suffices
to consider a single target function f : X → {−1, 1} over a domain X.
Definition 1 A γ-advantage weak learner L is an algorithm that is given access to a source of independent random labeled examples drawn from an (unknown and arbitrary) probability distribution
P over labeled examples {(x, f(x))}_{x∈X}. L must⁴ return a weak hypothesis h : X → {−1, 1} that
satisfies Pr_{(x,f(x))∼P}[h(x) = f(x)] ≥ 1/2 + γ. Such an h is said to have advantage γ w.r.t. P.
We fix P to henceforth denote the initial distribution over labeled examples, i.e. P is a distribution
over {(x, f(x))}_{x∈X} where the marginal distribution P_X may be an arbitrary distribution over X.
Intuitively, a boosting algorithm runs the weak learner repeatedly on a sequence of carefully chosen
distributions to obtain a sequence of weak hypotheses, and combines the weak hypotheses to obtain a
final hypothesis that has high accuracy under P. We give a precise definition below, but first we give
some intuition to motivate our definition. In stage t of a parallel booster the boosting algorithm may
run the weak learner many times in parallel using different probability distributions. The probability
weight of a labeled example (x, f (x)) under a distribution constructed at the t-th stage of boosting
may depend on the values of all the weak hypotheses from previous stages and on the value of
f (x), but may not depend on any of the weak hypotheses generated by any of the calls to the weak
learner in stage t. No other dependence on x is allowed, since intuitively the only interface that
the boosting algorithm should have with each data point is through its label and the values of the
weak hypotheses from earlier stages. We further observe that since the distribution P is the only
source of labeled examples, a booster should construct the distributions at each stage by somehow
"filtering" examples (x, f(x)) drawn from P based only on the value of f(x) and the values of the
weak hypotheses from previous stages. We thus define a parallel booster as follows:
Definition 2 (Parallel booster) A T-stage parallel boosting algorithm with N-fold parallelism is
defined by TN functions {α_{t,k}}_{t∈[T],k∈[N]} and a (randomized) Boolean function h, where α_{t,k} :
{−1, 1}^{(t−1)N+1} → [0, 1] and h : {−1, 1}^{TN} → {−1, 1}. In the t-th stage of boosting the weak
learner is run N times in parallel. For each k ∈ [N], the distribution P_{t,k} over labeled examples
that is given to the k-th run of the weak learner is as follows: a draw from P_{t,k} is made by drawing
(x, f(x)) from P and accepting (x, f(x)) as the output of the draw from P_{t,k} with probability
p_x = α_{t,k}(h_{1,1}(x), . . . , h_{t−1,N}(x), f(x)) (and rejecting it and trying again otherwise). In stage t,
for each k ∈ [N] the booster gives the weak learner access to P_{t,k} as defined above and the weak
learner generates a hypothesis h_{t,k} that has advantage at least γ w.r.t. P_{t,k}.
After T stages, TN weak hypotheses {h_{t,k}}_{t∈[T],k∈[N]} have been obtained from the weak learner.
The final hypothesis of the booster is H(x) := h(h_{1,1}(x), . . . , h_{T,N}(x)), and its accuracy is
min_{h_{t,k}} Pr_{(x,f(x))∼P}[H(x) = f(x)], where the min is taken over all sequences of TN weak hypotheses subject to the condition that each h_{t,k} has advantage at least γ w.r.t. P_{t,k}.
The parameter N above corresponds to the number of processors that the parallel booster is using;
we get a sequential booster when N = 1. Many of the most common PAC-model boosters in the
literature are sequential boosters, such as [26, 10, 9, 11, 27, 5] and others. In [10] Freund gave a
boosting algorithm and showed that after T stages of boosting, his algorithm generates a final hypothesis that is guaranteed to have error at most vote(γ, T) := Σ_{j=0}^{⌊T/2⌋} (T choose j) (1/2 + γ)^j (1/2 − γ)^{T−j}
(see Theorem 2.1 of [10]). Freund also gave a matching lower bound by showing (see his Theorem 2.4) that any T-stage sequential booster must have error at least as large as vote(γ, T), and so
consequently any sequential booster that generates a (1 − ε)-accurate final hypothesis must run for
Ω(log(1/ε)/γ²) stages. Our Theorem 2 below extends this lower bound to parallel boosters.
Several parallel boosting algorithms have been given in the literature, including branching program [20, 13, 18, 19] and decision tree [15] boosters. All of these boosters take O(log(1/ε)/γ²)
stages to learn to accuracy 1 − ε; our theorem below implies that any parallel booster must run for
Ω(log(1/ε)/γ²) stages no matter how many parallel calls to the weak learner are made per stage.
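The quantity vote(γ, T) is just a binomial tail, and a few lines of code make the log(1/ε)/γ² scaling visible; the parameters below are our own illustration.

```python
from math import comb

def vote(gamma, T):
    """vote(gamma, T) = sum_{j=0}^{floor(T/2)} C(T,j) (1/2+gamma)^j (1/2-gamma)^(T-j);
    equivalently, the probability of at most floor(T/2) heads in T
    independent (1/2 + gamma)-biased coin flips."""
    p = 0.5 + gamma
    return sum(comb(T, j) * p**j * (1 - p)**(T - j) for j in range(T // 2 + 1))

# The error bound only drops below eps once T is of order log(1/eps)/gamma^2.
for T in (10, 100, 1000):
    print(T, vote(0.05, T))
```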
Theorem 2 Let B be any T-stage parallel boosting algorithm with N-fold parallelism. Then for
any 0 < γ < 1/2, when B is used to boost a γ-advantage weak learner the resulting final hypothesis
may have error as large as vote(γ, T) (see the discussion after Definition 2).
We emphasize that Theorem 2 holds for any γ and any N that may depend on γ in an arbitrary way.
⁴The usual definition of a weak learner would allow L to fail with probability δ. This probability can be made
exponentially small by running L multiple times so for simplicity we assume there is no failure probability.
The theorem is proved as follows: fix any 0 < γ < 1/2 and fix B to be any T-stage parallel
boosting algorithm. We will exhibit a target function f and a distribution P over {(x, f(x))}_{x∈X},
and describe a strategy that a weak learner W can use to generate weak hypotheses h_{t,k} that each
have advantage at least γ with respect to the distributions P_{t,k}. We show that with this weak learner
W, the resulting final hypothesis H that B outputs will have accuracy at most 1 − vote(γ, T).
We begin by describing the desired f and P. The domain X of f is X = Z × Ω, where Z =
{−1, 1} and Ω is the set of all ω = (ω₁, ω₂, . . .) where each ωᵢ belongs to {−1, 1}. The target
function f is simply f(z, ω) = z. The distribution P = (P_X, P_Y) over {(x, f(x))}_{x∈X} is defined
as follows. A draw from P is obtained by drawing x = (z, ω) from P_X and returning (x, f(x)). A
draw of x = (z, ω) from P_X is obtained by first choosing a uniform random value in {−1, 1} for z,
and then choosing ωᵢ ∈ {−1, 1} to equal z with probability 1/2 + γ independently for each i. Note
that under P, given the label z = f(x) of a labeled example (x, f(x)), each coordinate ωᵢ of x is
correct in predicting the value of f(x, z) with probability 1/2 + γ independently of all other ωⱼ's.
We next describe a way that a weak learner W can generate a γ-advantage weak hypothesis each
time it is invoked by B. Fix any t ∈ [T] and any k ∈ [N]. When W is invoked with P_{t,k} it replies as
follows (recall that for x ∈ X we have x = (z, ω) as described above): (i) if Pr_{(x,f(x))∼P_{t,k}}[ω_t =
f(x)] ≥ 1/2 + γ then the weak hypothesis h_{t,k}(x) is the function "ω_t," i.e. the (t + 1)-st coordinate
of x. Otherwise, (ii) the weak hypothesis h_{t,k}(x) is "z," i.e. the first coordinate of x. (Note that
since f(x) = z for all x, this weak hypothesis has zero error under any distribution.)
It is clear that each weak hypothesis h_{t,k} generated as described above indeed has advantage at least
γ w.r.t. P_{t,k}, so the above is a legitimate strategy for W. The following lemma will play a key role:
Lemma 8 If W never uses option (ii) then Pr_{(x,f(x))∼P}[H(x) ≠ f(x)] ≥ vote(γ, T).
Proof: If the weak learner never uses option (ii) then H depends only on variables ω₁, . . . , ω_T and
hence is a (randomized) Boolean function over these variables. Recall that for (x = (z, ω), f(x) =
z) drawn from P, each coordinate ω₁, . . . , ω_T independently equals z with probability 1/2 + γ.
Hence the optimal (randomized) Boolean function H over inputs ω₁, . . . , ω_T that maximizes the
accuracy Pr_{(x,f(x))∼P}[H(x) = f(x)] is the (deterministic) function H(x) = Maj(ω₁, . . . , ω_T) that
outputs the majority vote of its input bits. (This can be easily verified using Bayes' rule in the usual
"Naive Bayes" calculation.) The error rate of this H is precisely the probability that at most ⌊T/2⌋
"heads" are obtained in T independent (1/2 + γ)-biased coin tosses, which equals vote(γ, T).
Thus it suffices to prove the following lemma, which we prove by induction on t:
Lemma 9 W never uses option (ii) (i.e. Pr_{(x,f(x))∼P_{t,k}}[ω_t = f(x)] ≥ 1/2 + γ always).
Proof: Base case (t = 1). For any k ∈ [N], since t = 1 there are no weak hypotheses from
previous stages, so the value of p_x is determined by the bit f(x) = z (see Definition 2). Hence P_{1,k}
is a convex combination of two distributions which we call D₁ and D₋₁. For b ∈ {−1, 1}, a draw
of (x = (z, ω); f(x) = z) from D_b is obtained by setting z = b and independently setting each
coordinate ωᵢ equal to z with probability 1/2 + γ. Thus in the convex combination P_{1,k} of D₁ and
D₋₁, we also have that ω₁ equals z (i.e. f(x)) with probability 1/2 + γ. So the base case is done.
Inductive step (t > 1). Fix any k ∈ [N]. The inductive hypothesis and the weak learner's strategy
together imply that for each labeled example (x = (z, ω), f(x) = z), since h_{s,ℓ}(x) = ω_s for
s < t, the rejection sampling parameter p_x = α_{t,k}(h_{1,1}(x), . . . , h_{t−1,N}(x), f(x)) is determined
by ω₁, . . . , ω_{t−1} and z and does not depend on ω_t, ω_{t+1}, .... Consequently the distribution P_{t,k}
over labeled examples is some convex combination of 2^t distributions which we denote D_b, where b
ranges over {−1, 1}^t corresponding to conditioning on all possible values for ω₁, . . . , ω_{t−1}, z. For
each b = (b₁, . . . , b_t) ∈ {−1, 1}^t, a draw of (x = (z, ω); f(x) = z) from D_b is obtained by setting
z = b_t, setting (ω₁, . . . , ω_{t−1}) = (b₁, . . . , b_{t−1}), and independently setting each other coordinate
ωⱼ (j ≥ t) equal to z with probability 1/2 + γ. In particular, because ω_t is conditionally independent
of ω₁, ..., ω_{t−1} given z, Pr(ω_t = z | ω₁ = b₁, ..., ω_{t−1} = b_{t−1}) = Pr(ω_t = z) = 1/2 + γ. Thus
in the convex combination P_{t,k} of the different D_b's, we also have that ω_t equals z (i.e. f(x)) with
probability 1/2 + γ. This concludes the proof of the lemma and the proof of Theorem 2.
References
[1] R. Arriaga and S. Vempala. An algorithmic theory of learning: Robust concepts and random projection. In Proc. 40th FOCS, pages 616–623, 1999.
[2] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4):929–965, 1989.
[3] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge, 2004.
[4] J. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for l1-regularized loss minimization. ICML, 2011.
[5] Joseph K. Bradley and Robert E. Schapire. Filterboost: Regression and classification on large datasets. In NIPS, 2007.
[6] N. Bshouty, S. Goldman, and H.D. Mathias. Noise-tolerant parallel learning of geometric concepts. Inf. and Comput., 147(1):89–110, 1998.
[7] Michael Collins, Robert E. Schapire, and Yoram Singer. Logistic regression, adaboost and bregman distances. Machine Learning, 48(1-3):253–285, 2002.
[8] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction. ICML, 2011.
[9] C. Domingo and O. Watanabe. MadaBoost: a modified version of AdaBoost. In Proc. 13th COLT, pages 180–189, 2000.
[10] Y. Freund. Boosting a weak learning algorithm by majority. Inf. and Comput., 121(2):256–285, 1995.
[11] Y. Freund. An adaptive version of the boost-by-majority algorithm. Mach. Learn., 43(3):293–318, 2001.
[12] R. Greenlaw, H.J. Hoover, and W.L. Ruzzo. Limits to Parallel Computation: P-Completeness Theory. Oxford University Press, New York, 1995.
[13] A. Kalai and R. Servedio. Boosting in the presence of noise. Journal of Computer & System Sciences, 71(3):266–290, 2005.
[14] N. Karmarkar. A new polynomial time algorithm for linear programming. Combinatorica, 4:373–395, 1984.
[15] M. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. In Proceedings of the Twenty-Eighth Annual Symposium on Theory of Computing, pages 459–468, 1996.
[16] M. Kearns and U. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA, 1994.
[17] N. Littlestone. From online to batch learning. In Proc. 2nd COLT, pages 269–284, 1989.
[18] P. Long and R. Servedio. Martingale boosting. In Proc. 18th Annual COLT, pages 79–94, 2005.
[19] P. Long and R. Servedio. Adaptive martingale boosting. In Proc. 22nd NIPS, pages 977–984, 2008.
[20] Y. Mansour and D. McAllester. Boosting using branching programs. Journal of Computer & System Sciences, 64(1):103–112, 2002.
[21] Y. Nesterov and A. Nemirovskii. Interior Point Polynomial Methods in Convex Programming: Theory and Applications. Society for Industrial and Applied Mathematics, Philadelphia, 1994.
[22] John H. Reif. O(log² n) time efficient parallel factorization of dense, sparse separable, and banded matrices. SPAA, 1994.
[23] J. Renegar. A polynomial-time algorithm, based on Newton's method, for linear programming. Mathematical Programming, 40:59–93, 1988.
[24] James Renegar. A mathematical view of interior-point methods in convex optimization. Society for Industrial and Applied Mathematics, 2001.
[25] F. Rosenblatt. The Perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958.
[26] R. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990.
[27] R. Servedio. Smooth boosting and learning with malicious noise. JMLR, 4:633–648, 2003.
[28] S. Shalev-Shwartz and Y. Singer. On the equivalence of weak learnability and linear separability: New relaxations and efficient boosting algorithms. Machine Learning, 80(2):141–163, 2010.
[29] L. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
[30] V. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
[31] J. S. Vitter and J. Lin. Learning in parallel. Inf. Comput., 96(2):179–202, 1992.
[32] DIMACS 2011 Workshop. Parallelism: A 2020 Vision. 2011.
[33] NIPS 2009 Workshop. Large-Scale Machine Learning: Parallelism and Massive Datasets. 2009.
Linear Submodular Bandits
and their Application to Diversified Retrieval
Yisong Yue
iLab, Heinz College
Carnegie Mellon University
[email protected]
Carlos Guestrin
Machine Learning Department
Carnegie Mellon University
[email protected]
Abstract
Diversified retrieval and online learning are two core research areas in the design
of modern information retrieval systems. In this paper, we propose the linear submodular bandits problem, which is an online learning setting for optimizing a general class of feature-rich submodular utility models for diversified retrieval. We
present an algorithm, called LSBGREEDY, and prove that it efficiently converges
to a near-optimal model. As a case study, we applied our approach to the setting
of personalized news recommendation, where the system must recommend small
sets of news articles selected from tens of thousands of available articles each day.
In a live user study, we found that LSBGREEDY significantly outperforms existing
online learning approaches.
1
Introduction
User feedback has become an invaluable source of training data for optimizing information retrieval
systems in a rapidly expanding range of domains, most notably content recommendation (e.g., news,
movies, ads). When designing retrieval systems that adapt to user feedback, two important challenges arise. First, the system should recommend optimally diversified content that maximizes
coverage of the information the user finds interesting (to maximize positive feedback). Second, the
system should make exploratory recommendations in order to learn a reliable model from feedback.
Challenge 1: diversification. In most retrieval settings, the retrieval system must recommend sets of
articles, rather than individual articles. Furthermore, the recommended articles should be well diversified. This is motivated by the principle that recommending redundant articles leads to diminishing
returns on utility, since users need to consume redundant information only once. This notion of diminishing returns is well-captured by submodular utility models, which have become an increasingly
popular approach to modeling diversified retrieval tasks in recent years [24, 25, 18, 3, 21, 9, 16].
Challenge 2: feature-based exploration. In most retrieval settings, users typically only provide
feedback on the articles recommended to them. This partial feedback issue leads to an inherent
tension between exploration and exploitation when deciding which articles to recommend to the
user. Furthermore, it is typically desirable to learn a feature-based model that can generalize to new
or previously unseen articles and users; this is often called the contextual bandits problem [13, 15, 7].
Although there exist approaches that have addressed these challenges individually, to our knowledge
there is no single approach which solves both simultaneously and is also practical to implement. For
instance, existing online approaches for optimizing submodular functions typically assume a featurefree model, and thus cannot generalize easily [18, 22, 23]. Such approaches measure performance
relative to the single best set (e.g., of articles). Thus, they are not suitable for many retrieval settings
since the set of available articles can change frequently (e.g., news recommendation).
In this paper, we address both challenges in a unified framework. We propose the linear submodular
bandits problem, which is an online learning setting for optimizing a general class of feature-based
1
submodular utility models. To make learning practical, we represent the benefit of adding an article
to an existing set of selected articles as a linear model with respect to the user's preferences. This
class of models encompasses several existing information coverage utility models for diversified
retrieval [24, 25, 9], and allows us to learn flexible models that can generalize to new predictions.
Similar to the contextual bandits setting considered in [15], our setting can be characterized as a
feature-based exploration-exploitation problem, where the uncertainty lies in how best to model user
interests using the available features. In contrast to [15], we aim to recommend optimally diversified
sets of articles rather than just single articles. From that standpoint, modeling this additional layer of
complexity in the bandit setting is our main technical contribution. We present an algorithm, called
LSBG REEDY, to optimize this exploration-exploitation trade-off. When learning a d-dimensional
model to recommend sets
p of L articles for T time steps, we prove that LSBG REEDY incurs regret that grows as O(d LT ) (ignoring log factors). This regret matches the convergence rates of
analogous algorithms for the conventional linear bandits setting [1, 20, 8].
As a case study, we applied our approach to the setting of personalized news recommendation [9,
15, 16]. In addition to simulation experiments, we conducted a live user study over a period of
ten rounds, where in each round the retrieval system must recommend a small set of news articles
selected from tens of thousands of available articles for that round. We compared against existing
online learning approaches that either employ no exploration [9], or learn to recommend only single
articles (and thus do not model diversity) [15]. Compared to previous approaches, we find that
LSBGREEDY can significantly improve the performance of the retrieval system even when learning
for a limited number of rounds. Our empirical results demonstrate the advantage of jointly tackling
the challenges of diversification and feature-based exploration, as well as showcase the practicality
of our approach.
2  Submodular Information Coverage Models
Before presenting our online learning setting, we first describe the class of utility functions that we
optimize over. Throughout this paper, we use personalized news recommendation as our motivating
example. In this setting, utility corresponds to the amount of interesting information covered by the
set of recommended articles.
Suppose that news articles are represented using a set of d "topics" or "concepts" that we wish to
cover (e.g., the Middle East or the weather).¹ Intuitively, recommending two articles that cover
highly overlapping topics might not be more beneficial than recommending just one of the articles
? this is the notion of diminishing returns we wish to capture in our information coverage model.
Two key properties we will exploit are that our utility functions are monotone and submodular. A
set function F mapping sets of recommended articles A to real values (e.g., the total information
covered by A) is monotone and submodular if and only if
F(A ∪ {a}) ≥ F(A)   and   F(A ∪ {a}) − F(A) ≥ F(B ∪ {a}) − F(B),
respectively, for all articles a and sets A ⊆ B. In other words, since A is smaller than B, the benefit
of adding a to A is larger than the benefit of adding a to B. Submodularity provides a natural
framework for characterizing diminishing returns in information coverage, since the gain of adding
a second (redundant) article on a topic will be smaller than the gain of adding the first.
For each topic i, let Fi (A) be a monotone submodular function corresponding to how well the
recommended articles A cover topic i. We write the total utility of recommending A as
F(A|w) = wᵀ ⟨F₁(A), . . . , F_d(A)⟩,   (1)
where w ∈ ℝᵈ₊ is a parameter vector indicating the user's interest level in each topic. Thus, F(A|w)
corresponds to the weighted information coverage of A, and depends on the preferences of the particular user. Since sums of monotone submodular functions are themselves monotone submodular,
this implies that F(A|w) is also monotone submodular (this would not hold if w had negative components). When making recommendations, the goal then is to select the A that maximizes F(A|w).
This class of information utility models encompasses several existing models of information coverage for diversified retrieval [24, 25, 9].
¹In general, these features can represent any "nugget of information", such as a single word [24, 25, 9].
Example: Probabilistic Coverage. As an illustrative example, we now describe the probabilistic
coverage model proposed in [9]. This will also be the coverage model used in our case study (see
Section 5). Each article a has some probability P(i|a) of covering topic i.² Assuming each article
a ∈ A has an independent probability of covering each topic, then we can write Fᵢ(A) as
Fᵢ(A) = 1 − Π_{a∈A} (1 − P(i|a)),   (2)
which corresponds to the probability that topic i is covered by at least one article in A. It is straightforward to check that Fᵢ in (2) is monotone submodular [9].
Local Linearity. One attractive property of F(A|w) in (1) is that the incremental gains are locally
linear. In particular, the incremental gain of adding a to A can be written as wᵀΔ(a|A), where
Δ(a|A) = ⟨ F₁(A ∪ {a}) − F₁(A), . . . , F_d(A ∪ {a}) − F_d(A) ⟩.   (3)
In other words, the i-th component of Δ(a|A) corresponds to the incremental coverage (i.e., submodular advantage) of topic i by article a, conditioned on articles A having already been selected.
This property will be exploited by our online learning algorithm presented in Section 4.
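A minimal sketch of the probabilistic coverage model (2) and the incremental coverage vector Δ(a|A) from (3), with each article represented by its vector of topic-coverage probabilities P(·|a); all names and the toy articles are our own.

```python
import numpy as np

def coverage(A, d):
    """F(A) = (F_1(A), ..., F_d(A)) with F_i(A) = 1 - prod_{a in A}(1 - P(i|a));
    each article a is a length-d vector of topic-coverage probabilities."""
    miss = np.ones(d)                # probability each topic is still uncovered
    for a in A:
        miss *= 1.0 - a
    return 1.0 - miss

def delta(a, A, d):
    """Incremental coverage vector Delta(a|A) from equation (3)."""
    return coverage(A + [a], d) - coverage(A, d)

# Two near-duplicate articles: the second one's gain shrinks (diminishing returns).
a1 = np.array([0.9, 0.1, 0.0, 0.0])
a2 = np.array([0.8, 0.2, 0.0, 0.0])
print(delta(a1, [], 4))             # full gain on topic 1
print(delta(a2, [a1], 4))           # much smaller residual gain on topic 1
```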
Optimization. Another attractive property of monotone submodular functions is that the myopic
greedy algorithm is guaranteed to produce a near-optimal solution [17]. For any budget L (e.g.,
L = 10 articles), the constrained optimization problem argmax_{A:|A|≤L} F(A|w) can be solved
greedily to produce a solution that is within a factor (1 − 1/e) ≈ 0.63 of optimal. Achieving
better than (1 − 1/e)·OPT is known to be intractable unless P = NP [10]. In practice, the greedy
algorithm can often perform much better than this worst case guarantee (cf. [14]), and will be a
central component in our online learning algorithm.
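Reusing coverage and delta from the sketch above, the myopic greedy procedure is a few lines; by [17] it attains at least a (1 − 1/e) fraction of the optimum of F(A|w). The preference vector and pool below are hypothetical.

```python
def greedy(pool, w, L, d):
    """Greedily select L articles (approximately) maximizing F(A|w) = w . F(A)."""
    A, remaining = [], list(range(len(pool)))
    for _ in range(L):
        gains = [w @ delta(pool[i], A, d) for i in remaining]
        best = remaining[int(np.argmax(gains))]
        A.append(pool[best])
        remaining.remove(best)
    return A

w = np.array([1.0, 0.5, 0.2, 0.2])          # hypothetical topic preferences
pool = [a1, a2, np.array([0.0, 0.0, 0.9, 0.1])]
picked = greedy(pool, w, L=2, d=4)          # picks a1, then the topic-3 article
```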
3  Problem Formulation
We propose the linear submodular bandits problem which is described in the following. At each
time step t = 1, . . . , T, our algorithm interacts with the user in the following way:
• A set of articles A_t is made available to the algorithm. Each article a ∈ A_t is represented using a set
of d basis coverage functions F₁, . . . , F_d, defined as in Section 2, which is known to the algorithm.
• The algorithm chooses a ranked set of L articles, denoted A_t = (a_t^{(1)}, . . . , a_t^{(L)}), using the basis
coverage functions of the articles and the outcomes of previous time steps.
• The user provides feedback (e.g., clicks on or ignores each article), and the rewards r_t(A_t), as defined in (4), for each recommended article are observed.
In order to develop our algorithm, we require a model of user behavior. We assume the user scans
the recommended articles A = (a^{(1)}, . . . , a^{(L)}) one by one in top-down fashion. For each article
a^{(ℓ)}, the user considers the new information covered by a^{(ℓ)} and not covered by the above articles
A^{(1:ℓ−1)} (A^{(1:ℓ)} denotes the articles in the first ℓ slots). In our representation, this new information
is Δ(a^{(ℓ)}|A^{(1:ℓ−1)}) as in (3). The user then clicks on (or likes) a^{(ℓ)} with independent probability
(w*)ᵀ Δ(a^{(ℓ)}|A^{(1:ℓ−1)}), where w* is the hidden preferences of the user. Formally, for any set of
articles A chosen at time t, the rewards r_t(A) can be written as the sum of rewards at each slot,
r_t(A) = Σ_{ℓ=1}^{L} r_t^{(ℓ)}(A).   (4)
We assume each r_t^{(ℓ)} is an independent random variable bounded in [0, 1] and satisfies
E[ r_t^{(ℓ)}(A) ] = (w*)ᵀ Δ(a^{(ℓ)}|A^{(1:ℓ−1)}),   (5)
where w* is a weight vector unknown to the algorithm with ‖w*‖ ≤ S. In other words, the expected
reward in each slot is realizable, linear in Δ(a^{(ℓ)}|A^{(1:ℓ−1)}), and independent of the other slots. We
call this independence property conditional submodular independence, which we will leverage in
our analysis.
²E.g., the topics and coverage probabilities can be derived from a topic model such as LDA [4].
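To make the reward model concrete, here is a small sketch that simulates slot-by-slot clicks under conditional submodular independence, reusing delta from the coverage sketch above; w_star and the ranked articles are illustrative.

```python
def simulate_clicks(ranked, w_star, d, rng):
    """Each slot l is clicked independently with probability
    w_star . Delta(a^(l) | A^(1:l-1)), per equations (4)-(5)."""
    clicks, A = [], []
    for a in ranked:
        p = float(np.clip(w_star @ delta(a, A, d), 0.0, 1.0))
        clicks.append(bool(rng.random() < p))
        A.append(a)
    return clicks

rng = np.random.default_rng(3)
w_star = np.array([0.6, 0.2, 0.1, 0.1])     # hidden user preferences
ranked = [np.array([0.9, 0.1, 0.0, 0.0]), np.array([0.0, 0.0, 0.9, 0.1])]
print(simulate_clicks(ranked, w_star, 4, rng))
```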
Algorithm 1 LSBGREEDY
1: input: λ, {α_t^{(ℓ)}}
2: for t = 1, . . . , T do
3:   M_t ← λI_d + Σ_{τ=1}^{t−1} Σ_{ℓ=1}^{L} Δ_τ^{(ℓ)} (Δ_τ^{(ℓ)})ᵀ   //covariance matrix
4:   b_t ← Σ_{τ=1}^{t−1} Σ_{ℓ=1}^{L} r̂_τ^{(ℓ)} Δ_τ^{(ℓ)}   //aggregate feedback so far
5:   w_t ← M_t^{−1} b_t   //linear regression using previous feedback as training data
6:   A^{(t)} ← ∅
7:   for ℓ = 1, . . . , L do
8:     ∀a ∈ A_t \ A^{(t)}: μ_a ← w_tᵀ Δ(a|A^{(t)})   //compute mean estimate of utility gain
9:     ∀a ∈ A_t \ A^{(t)}: c_a ← α_t^{(ℓ)} √( Δ(a|A^{(t)})ᵀ M_t^{−1} Δ(a|A^{(t)}) )   //compute confidence interval
10:    a_t^{(ℓ)} ← argmax_a ( μ_a + c_a )   //select article with highest upper confidence bound
11:    store Δ_t^{(ℓ)} ← Δ(a_t^{(ℓ)}|A^{(t)}), then A^{(t)} ← A^{(t)} ∪ {a_t^{(ℓ)}}
12:  end for
13:  recommend articles A^{(t)} in the order selected, and observe rewards r̂_t^{(1)}, . . . , r̂_t^{(L)} for each slot
14: end for
(Here A_t denotes the pool of available articles at time t and A^{(t)} the set selected so far in round t.)
While conditional submodular independence may seem ideal, we will show in our user
study experiments that it is not required for our proposed algorithm to achieve good performance.
Equations (4) and (5) imply that E[r_t(A)] = F(A|w*) for F defined as in (1). Thus, E[r_t] is
monotone submodular, and a clairvoyant system with perfect knowledge of w* can greedily select
articles to achieve (expected) reward at least (1 − 1/e)OPT, where OPT denotes the total expected
reward of the optimal recommendations for t = 1, . . . , T. Let A*_t denote the optimal set of articles
at time t. We quantify performance using the following notion of regret, which we call greedy regret,
    Reg_G(T) = (1 − 1/e) Σ_{t=1}^{T} E[r_t(A*_t)] − Σ_{t=1}^{T} r_t(A_t) = (1 − 1/e) OPT − Σ_{t=1}^{T} r_t(A_t).    (6)
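In simulation, where w* is known to the simulator, (6) can be tracked directly against the clairvoyant greedy baseline. A minimal sketch (ours), reusing greedy_select and marginal_gain from the earlier coverage sketch and using expected rewards in place of realized ones:

import numpy as np

def expected_reward(P, w, A):
    """F(A|w) = sum_l w . Delta(a^(l)|A^(1:l-1)), which equals E[r_t(A)] by (4)-(5)."""
    residual = np.ones(P.shape[1])
    total = 0.0
    for a in A:
        total += float(w @ (P[a] * residual))
        residual *= 1.0 - P[a]
    return total

def greedy_regret(days, w_star, L):
    """Empirical version of (6): for each day's article pool P and the algorithm's
    chosen set A, accumulate (1 - 1/e) F(A*_t|w*) - F(A_t|w*), where A*_t is the
    clairvoyant greedy set computed with the true w*."""
    reg = 0.0
    for P, A in days:  # days: list of (pool, chosen_set) pairs
        A_star = greedy_select(P, w_star, L)   # clairvoyant greedy baseline
        reg += (1.0 - 1.0 / np.e) * expected_reward(P, w_star, A_star) \
               - expected_reward(P, w_star, A)
    return reg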
4 Algorithm and Main Results
A central question in the study of bandit problems is how best to balance the trade-off between
exploration and exploitation (cf. [15]). To minimize regret (6), an algorithm must exploit its past
experience to recommend sets of articles that appear to maximize information coverage. However,
topics that appear good (i.e., interesting to the user) may actually be suboptimal due to imprecision in the algorithm's knowledge. In order to avoid this situation, the algorithm must explore by
recommending articles about seemingly poor topics in order to gather more information about them.
In this section, we present an algorithm, called LSBGreedy, which automatically trades off between exploration and exploitation (Algorithm 1). LSBGreedy balances exploration and exploitation using upper confidence bounds on the estimated gain in utility, and builds upon upper confidence
bound style algorithms for the conventional linear bandits setting [8, 20, 15, 7, 1]. Intuitively, the
algorithm can be decomposed into the following components.
Training a Model. Since we employ a linear model, at each time t, we can fit an estimate w_t of the
true w* via linear regression on the previous feedback. Lines 3–5 in Algorithm 1 describe this step,
where Δ_τ^(ℓ) denotes the incremental coverage features of the article selected at time τ and slot ℓ, and
r̂_τ^(ℓ) denotes the associated reward. Note that λ in Line 3 is the standard regularization parameter.
Estimating Incremental Coverage. Given w_t, we can now estimate the incremental gain of adding
any article a to an existing set of results A. As discussed in Section 3, the true (expected) incremental
gain is (w*)^⊤ Δ(a|A). Our algorithm's estimate is w_t^⊤ Δ(a|A) (Line 8). If our algorithm were to
purely exploit prior knowledge, then it would greedily choose articles that maximize w_t^⊤ Δ(a|A).³

Computing Confidence Intervals. Of course, each w_t is an imprecise estimate of the true w*.
Given such uncertainty, a natural approach is to use confidence intervals which contain the true w*
with some target confidence (e.g., 95%). Our algorithm's uncertainty in the gain of article a given set
A depends directly on how much feedback we have collected regarding prominent topics in Δ(a|A).
In our linear setting, uncertainty is measured using the inverse covariance matrix M_t^{−1} of the
submodular features of the previously selected articles (Line 9). If our algorithm were to purely explore,
then it would greedily select articles that have maximal uncertainty √(Δ(a|A)^⊤ M_t^{−1} Δ(a|A)).

³Note that w_t may have negative components, which would make F(·|w_t) not monotone submodular. However, regret is measured by F(·|w*), which is monotone submodular. We show in our analysis that having
negative components in w_t does not hinder our ability to converge efficiently to w* in a regret sense.

Figure 1: Illustrative example of LSBGreedy for L = 2 and 2 days. Each day comprises 3 articles
covering 4 topics, which are depicted in the two plots (not reproduced here). Each row in the
accompanying table describes the choices of LSBGreedy and the resulting feedback:

    t | A_t^(1) | r_t^(1) | A_t^(2) | r_t^(2)
    1 |   a1    |    1    |   a2    |    0
    2 |   b1    |    1    |   b3    |    1

In day 1, LSBGreedy recommends articles to explore topics 1, 2, and 3, and the user indicates liking
a1 and disliking a2. In day 2, LSBGreedy recommends b1 to exploitatively cover topic 1, and b3 to
both cover topic 1 and explore topic 4.
Balancing Exploration and Exploitation. In order to achieve low regret, LSBGreedy greedily
selects articles that maximize a compromise between estimated gain and uncertainty (Line 10), with
α_t controlling the tradeoff. For any δ ∈ (0, 1), Lemma 3 in Appendix A.2 provides sufficient
conditions on α_t for constructing confidence intervals,

    [ w_t^⊤ Δ(a|A) − α_t ‖Δ(a|A)‖_{M_t^{−1}} ,  w_t^⊤ Δ(a|A) + α_t ‖Δ(a|A)‖_{M_t^{−1}} ],    (7)

where ‖x‖_{M_t^{−1}} = √(x^⊤ M_t^{−1} x), that contain the true value, (w*)^⊤ Δ(a|A), with probability
at least 1 − δ. In this sense, Line 10 maximizes the upper confidence bound on the true expected
reward. Figure 1 provides an illustrative example of the behavior of LSBGreedy.
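Assembled from the three components above, here is a compact sketch of one LSBGreedy round (our own rendering with our own names; delta(a, A) is an assumed callable returning the incremental-coverage features Δ(a|A), e.g., as in the coverage sketch of Section 2):

import numpy as np

def lsb_greedy_round(pool, delta, M, b, L, alpha):
    """One round of LSBGreedy (a sketch). M and b are the running sufficient
    statistics, with M initialized to lambda * I before the first round."""
    w = np.linalg.solve(M, b)        # ridge-regression estimate w_t (Line 5)
    M_inv = np.linalg.inv(M)
    A, feats = [], []
    for _ in range(L):
        best, best_ucb, best_feat = None, -np.inf, None
        for a in pool:
            if a in A:
                continue
            d = delta(a, A)                        # Delta(a|A)
            mu = float(w @ d)                      # mean utility gain (Line 8)
            c = float(np.sqrt(d @ M_inv @ d))      # confidence width (Line 9)
            if mu + alpha * c > best_ucb:          # upper confidence bound (Line 10)
                best, best_ucb, best_feat = a, mu + alpha * c, d
        A.append(best)
        feats.append(best_feat)
    return A, feats

def lsb_greedy_update(M, b, feats, rewards):
    """Fold the observed per-slot rewards into M and b for the next round."""
    for d, r in zip(feats, rewards):
        M += np.outer(d, d)
        b += r * d
    return M, b

A typical round would call lsb_greedy_round, show the articles, collect the per-slot rewards, and then call lsb_greedy_update before the next round.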
We now state our main result, which essentially bounds the greedy regret (6) of LSBGreedy as
O(d √(TL)) (ignoring log factors). This means that the average loss incurred per slot and per day by
LSBGreedy relative to (1 − 1/e)OPT decreases at a rate of O(d/√(TL)).
Theorem 1. For L ≤ d, λ = L, and α_t defined as

    α_t = √(2 log(2 det(M_t)^{1/2} det(λ I_d)^{−1/2} / δ)) + √λ S,    (8)

with probability at least 1 − δ, LSBGreedy achieves greedy regret (6) bounded by

    Reg_G(T) ≤ α_T √(8 T L log det(M_{T+1})) + √(2(1 + TL) log(√(1 + TL)/(δ/2))) = O(S d √(TL) log(TL)).
The proof of Theorem 1 is presented in Appendix A in the supplementary material. In practice, the
choice of α_t in (8) may be overly conservative. As we show in our experiments, more aggressive
choices of α_t can often lead to faster convergence.
5 Empirical Analysis: Case Study in News Recommendation
We applied LSBGreedy to the setting of personalized news recommendation (cf. [9, 15, 16]),
where the system is tasked with recommending sets of articles that maximally cover the interesting
information of the available articles. The user provides feedback (e.g., by indicating that she likes
or dislikes each article), and the goal is to maximize the total positive feedback by personalizing
to the user. We conducted both simulation experiments as well as a live user study. Since real
users are unlikely to behave exactly according to our modeling assumptions (e.g., obey conditional
submodular independence), our user study tests the effectiveness of our approach in settings beyond
those considered in our theoretical analysis.
5.1 Simulations
Figure 2: Simulation results comparing LSBGreedy (red), RankLinUCB (black thick), Multiplicative Weighting (black thin), and ε-Greedy (dashed thin). The middle column computes regret versus
the clairvoyant greedy solution, and not (1 − 1/e)OPT. Unless specified, results are for L = 5.

Data. We ran simulations using both synthetic datasets as well as the blog dataset from [9]. For
each setting, we generated a hidden true preference vector w*. For the synthetic data, all articles
were randomly generated using d = 25 topics, and w* was randomly generated and re-scaled so the
most likely articles were liked with probability ≈ 75%. For the blog dataset, articles are represented
using d = 100 topics generated using Latent Dirichlet Allocation [4], and w* was derived from
a preliminary version of our user study. Our simulated user behaves according to the user model
described in Section 3. We use probabilistic coverage (2) as the submodular basis functions.
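The simulated user can be drawn directly from the click model of Section 3. A small sketch (the clipping to [0, 1] is our own safeguard; the paper instead re-scales w* so that click probabilities are valid):

import numpy as np

rng = np.random.default_rng(0)

def simulate_user(P, w_star, A):
    """Scan the ranked set A top-down and click on slot l with independent
    probability (w*)^T Delta(a^(l)|A^(1:l-1)), as in Eq. (5)."""
    residual = np.ones(P.shape[1])
    clicks = []
    for a in A:
        p_click = float(np.clip(w_star @ (P[a] * residual), 0.0, 1.0))
        clicks.append(int(rng.random() < p_click))
        residual *= 1.0 - P[a]
    return clicks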
Competing Methods. We compared LSBGreedy against the following online learning algorithms. Note that all learning algorithms use the same underlying submodular utility model.
• Multiplicative Weighting (MW) as proposed in [9], which does not employ exploration.
• RankLinUCB, which combines the LinUCB algorithm [8, 20, 15, 7, 1] with Ranked Bandits [18, 22]. RankLinUCB is similar to LSBGreedy except that it maintains a separate weight vector per slot, since it employs a reduction to L separate linear bandits (one per slot). In a sense, this is the natural application of existing approaches to our setting.⁴
• ε-Greedy, which randomly explores with probability ε, and exploits otherwise [15].

⁴One can show that RankLinUCB achieves greedy regret (6) that grows as O(dL√T) (ignoring log factors), which is a factor √L worse than the regret guarantee of LSBGreedy.
Results. Figure 2 shows a representative sample of our simulation results.⁵ We see that both ε-Greedy and Multiplicative Weighting achieve significantly worse results than LSBGreedy. We
also observe the performance of Multiplicative Weighting diverge in the synthetic dataset, which is
due to the fact that it does not employ exploration. RankLinUCB is more competitive, and achieves
matching performance in the synthetic dataset. We also see that RankLinUCB is more sensitive to
the choice of α_t. Interestingly, both LSBGreedy and RankLinUCB approach the same performance
when recommending L = 10 articles. This can be explained by the user's interests being saturated
by 10 articles, and suggests that the bound in Theorem 1 could potentially be further refined. Additional details can be found in Appendix B in the supplementary material.

⁵For all methods, we find performance to be relatively stable w.r.t. the tuning parameters (e.g., α_t for LSBGreedy). Unless specified, we set all parameters to values that achieve good results for their respective algorithms. In particular we set α_t = 1 for LSBGreedy, α_t = 0.6 for RankLinUCB, the MW parameter to 0.9, and ε = 0.1 for ε-Greedy. LSBGreedy, RankLinUCB, and ε-Greedy train linear models with regularization parameter λ, which we kept constant at λ = 1.
5.2 User Studies
Design. The design of our study is similar to the personalization study conducted in [9]. We presented each user with ten articles per day over ten days from January 18, 2009 to January 27, 2009.
Each day, the articles are selected using an interleaving of two policies (described below). The articles are displayed as a title with its contents viewable via a preview pane. The user is instructed
to briefly skim each article to get a sense of its content and, one by one, mark each article as "interested in reading in detail" (like), or "not interested" (dislike). As in [9], for each decision, the user
is told to take into account the articles shown above in the current day, so as to capture the notion
of incremental coverage. For example, a user might be interested in reading an article regarding the
Middle East appearing at the top slot, and would mark it as "interested." However, if several very
similar articles appear below it, the user may mark the subsequent articles as "not interested."
Figure 3: Displaying normalized learned preferences of LSBGreedy (dark) and MW (light) for
two user study sessions. In the left session, MW overfits to the "world" topic. In the right session,
the user likes very few articles, and MW does not discover any topics that interest the user.
Comparison                      | #Sessions | Win/Tie/Lose | Gain per day | % of likes
LSBGreedy vs Static Baseline    |    24     |  24 / 0 / 0  |     1.07     | 63% (67%)
LSBGreedy vs Mult. Weighting    |    26     |  24 / 1 / 1  |     0.54     | 57% (63%)
LSBGreedy vs RankLinUCB         |    27     |  21 / 2 / 4  |     0.58     | 57% (61%)

Table 1: User study comparing LSBGreedy with competing algorithms. The parenthetical values
in the last column are computed ignoring clicks on articles jointly recommended by both algorithms
(see Section 5.2). All results are statistically significant with 95% confidence.
Evaluation. For each day, we generate an interleaving of recommendations from two algorithms.
Interleaving allows us to make paired comparisons such that we simultaneously control for the
particular user and particular day (certain days may contain more or less interesting content to the
user than other days). Like other interleaving approaches [19], our approach maintains a notion of
fairness so that both competing algorithms recommend the same amount of content. After each day,
the user's feedback is collected and given to the two competing algorithms. Additional details of
our experimental setup can be found in Appendix C in the supplementary material.
Data. In order to distinguish the gains of the algorithms from other effects (such as imperfections in
the features, or having too high a dimension to converge), we performed dimensionality reduction.
We created 18 genres (examples shown in Figure 3), labeled relevant articles and trained a model
via linear regression for each genre. Note that many articles are relevant to multiple genres.
We compared LSBGreedy against the static baseline (i.e., no personalization), Multiplicative
Weighting (MW) from [9], and RankLinUCB. We evaluated each comparison setting using approximately twenty-five participants, most of whom are graduate students or young professionals.
Results. Table 1 describes our results. We first aggregated per user, and then aggregated over all
users. For each user, we computed three statistics: (1) whether LSBGreedy won, tied, or lost in
terms of total number of liked articles, (2) the difference in liked articles per day, and (3) the fraction
of liked articles recommended by LSBGreedy. Jointly recommended articles can be either counted
as half to each algorithm or ignored (these results are shown in parentheticals in Table 1).
Overall, about 90% of users preferred recommendations by LSBGreedy over the competing algorithms. On average, LSBGreedy obtains about one additional liked article per day and 63%
of all liked articles versus the static baseline, and about half an additional liked article per day and
57% of all liked articles versus the two competing learning algorithms. The gains we observe are
all statistically significant with 95% confidence, and show that LSBGreedy can be effective even
when the assumptions in our theoretical analysis may not be satisfied.
Figure 3 shows the learned preferences by LSBGreedy and MW on two sessions. Since MW does
not employ exploration, it can either overfit to its previous experience and not find new topics that
interest the user (left plot), or fail to discover any good topics (right plot). We do not include a
comparison with RankLinUCB since it learns L preference vectors, which are difficult to visualize.
6 Related Work
Diversified Retrieval. We are chiefly interested in training flexible submodular utility models,
since such models yield practical algorithmic approaches. At one extreme are feature-free models
that do not require training. However, such models are limited to unpersonalized settings that ignore
context, such as recommending a global set of blogs to monitor [14]. On the other hand, methods that
use feature-rich models typically either employ unsupervised training [24] or require fine-grained
subtopic labels [25]. Such learning approaches cannot easily adapt to new domains. One exception
is [9], whose proposed online learning approach does not incorporate exploration. As shown in our
experiments, this significantly inhibits the learning ability of their approach.
Beyond submodular models of information coverage, other approaches include methods that balance
relevance and novelty [5, 26, 6] and graph-based methods [27]. For such models, it remains a
challenge to design provably efficient online learning algorithms.
Bandit Learning. From the perspective of our work, existing bandit approaches can be categorized
along two dimensions: single-prediction versus set-prediction, and feature-based versus feature-free.
Most feature-based settings are designed to predict single results, rather than sets of results. Of such
settings, the most relevant to ours is the linear stochastic bandits setting [8, 20, 15, 7, 1], which
we build upon in our approach. One limitation here is the assumption of realizability, i.e., that the
"true" user model lies within our class. It may be possible to develop more robust algorithms for our
submodular bandits setting by building upon algorithms with more general guarantees (e.g., [2]).
Most set-based settings, such as bandit submodular optimization or the general bandit slate problem,
assume a feature-free model [18, 22, 23, 12]. As such, performance is quantified relative to a fixed
set of articles, which is not appropriate for many retrieval settings (e.g., news recommendation).
One exception is [21], which assumes that document and user models lie within a metric space.
However, it is unclear how to incorporate our submodular features into their setting.
7 Discussion of Limitations and Future Work
Submodular Basis Features. Our approach requires access to submodular basis functions as features. In practice these basis features are often derived using various topic modeling or dimensionality reduction techniques. However, the resulting features are almost always noisy or biased.
Furthermore, one expects that different users will be better modeled using different basis features.
As such, one important direction for future work is to learn the appropriate basis features from user
feedback, which is similar to the setting of interactive topic modeling [11].
Moreover, user behavior is likely to be influenced by many factors beyond those well-modeled by
submodular basis features. For example, the probability of the user liking a certain article could be
influenced by the time of day, or day of the week. A more unified approach would be to incorporate
both these standard features as well as submodular basis features in a joint model.
Curse of Dimensionality. The convergence rate of LSBGreedy depends linearly on the number
of features d (which appears unavoidable without further assumptions). Thus, our approach may not
be practical for settings that use a very large number of features. One possible extension is to jointly
learn from multiple users simultaneously. If users tend to have similar preferences, then learning
jointly from multiple users may yield convergence rates that are sub-linear in d.
8 Conclusion
We proposed an online learning setting for optimizing a general class of submodular functions.
This setting is well-suited for modeling diversified retrieval systems that interactively learn from
user feedback. We presented an algorithm, LSBGreedy, and proved that it efficiently converges
to a near-optimal model. We conducted simulations as well as user studies in the setting of news
recommendation, and found that LSBGreedy outperforms competing online learning approaches.
Acknowledgements. This work was funded in part by ONR (PECASE) N000141010672 and ONR Young
Investigator Program N00014-08-1-0752. The authors also thank Khalid El-Arini, Joey Gonzalez, Sue Ann
Hong, Jing Xiang, and the anonymous reviewers for their helpful comments.
References
[1] Y. Abbasi-Yadkori, D. Pal, and C. Szepesvari. Online least squares estimation with self-normalized processes: An application to bandit problems, 2011. http://arxiv.org/abs/1102.2670.
[2] J. Abernathy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for bandit linear
optimization. In Conference on Learning Theory (COLT), 2008.
[3] R. Agrawal, S. Gollapudi, A. Halverson, and S. Ieong. Diversifying search results. In ACM Conference
on Web Search and Data Mining (WSDM), 2009.
[4] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research (JMLR), 3:993–1022, 2003.
[5] J. Carbonell and J. Goldstein. The use of MMR, diversity-based re-ranking for reordering documents and
producing summaries. In ACM Conference on Information Retrieval (SIGIR), 1998.
[6] H. Chen and D. Karger. Less is more. In ACM Conference on Information Retrieval (SIGIR), 2006.
[7] W. Chu, L. Li, L. Reyzin, and R. Schapire. Contextual bandits with linear payoff functions. In Conference
on Artificial Intelligence and Statistics (AISTATS), 2011.
[8] V. Dani, T. Hayes, and S. Kakade. Stochastic linear optimization under bandit feedback. In Conference
on Learning Theory (COLT), 2008.
[9] K. El-Arini, G. Veda, D. Shahaf, and C. Guestrin. Turning down the noise in the blogosphere. In ACM
Conference on Knowledge Discovery and Data Mining (KDD), 2009.
[10] U. Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 45(4):634–652, 1998.
[11] Y. Hu, J. Boyd-Graber, and B. Satinoff. Interactive topic modeling. In Annual Meeting of the Association
for Computational Linguistics (ACL), 2011.
[12] S. Kale, L. Reyzin, and R. Schapire. Non-stochastic bandit slate problems. In Neural Information Processing Systems (NIPS), 2010.
[13] J. Langford and T. Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In Neural
Information Processing Systems (NIPS), 2007.
[14] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak
detection in networks. In ACM Conference on Knowledge Discovery and Data Mining (KDD), 2007.
[15] L. Li, W. Chu, J. Langford, and R. Schapire. A contextual-bandit approach to personalized news article
recommendation. In World Wide Web Conference (WWW), 2010.
[16] L. Li, D. Wang, T. Li, D. Knox, and B. Padmanabhan. Scene: A scalable two-stage personalized news
recommendation system. In ACM Conference on Information Retrieval (SIGIR), 2011.
[17] G. Nemhauser, L. Wolsey, and M. Fisher. An analysis of the approximations for maximizing submodular set functions. Mathematical Programming, 14:265–294, 1978.
[18] F. Radlinski, R. Kleinberg, and T. Joachims. Learning diverse rankings with multi-armed bandits. In
International Conference on Machine Learning (ICML), 2008.
[19] F. Radlinski, M. Kurup, and T. Joachims. How does clickthrough data reflect retrieval quality? In ACM
Conference on Information and Knowledge Management (CIKM), 2008.
[20] P. Rusmevichientong and J. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395–411, 2010.
[21] A. Slivkins, F. Radlinski, and S. Gollapudi. Learning optimally diverse rankings over large document
collections. In International Conference on Machine Learning (ICML), 2010.
[22] M. Streeter and D. Golovin. An online algorithm for maximizing submodular functions. In Neural
Information Processing Systems (NIPS), 2008.
[23] M. Streeter, D. Golovin, and A. Krause. Online learning of assignments. In Neural Information Processing Systems (NIPS), 2009.
[24] A. Swaminathan, C. Mathew, and D. Kirovski. Essential pages. In The IEEE/WIC/ACM International
Conference on Web Intelligence (WI), 2009.
[25] Y. Yue and T. Joachims. Predicting diverse subsets using structural svms. In International Conference on
Machine Learning (ICML), 2008.
[26] C. Zhai, W. W. Cohen, and J. Lafferty. Beyond independent relevance: methods and evaluation metrics
for subtopic retrieval. In ACM Conference on Information Retrieval (SIGIR), 2003.
[27] X. Zhu, A. Goldberg, J. V. Gael, and D. Andrzejewski. Improving diversity in ranking using absorbing
random walks. In NAACL Conference on Human Language Technologies (HLT), 2007.
3,806 | 4,446 | Efficient Online Learning
via Randomized Rounding
Ohad Shamir
Microsoft Research New England
USA
[email protected]
Nicol`
o Cesa-Bianchi
DSI, Universit`
a degli Studi di Milano
Italy
[email protected]
Abstract
Most online algorithms used in machine learning today are based on variants of mirror descent or follow-the-leader. In this paper, we present an
online algorithm based on a completely different approach, which combines
?random playout? and randomized rounding of loss subgradients. As an
application of our approach, we provide the first computationally efficient
online algorithm for collaborative filtering with trace-norm constrained matrices. As a second application, we solve an open question linking batch
learning and transductive online learning.
1
Introduction
Online learning algorithms, which have received much attention in recent years, enjoy an
attractive combination of computational efficiency, lack of distributional assumptions, and
strong theoretical guarantees. However, it is probably fair to say that at their core, most of
these algorithms are based on the same small set of fundamental techniques, in particular
mirror descent and regularized follow-the-leader (see for instance [14]).
In this work we revisit, and significantly extend, an algorithm which uses a completely
different approach. This algorithm, known as the Minimax Forecaster, was introduced
in [9, 11] for the setting of prediction with static experts. It computes minimax predictions
in the case of known horizon, binary outcomes, and absolute loss. Although the original
version is computationally expensive, it can easily be made efficient through randomization.
We extend the analysis of [9] to the case of non-binary outcomes and arbitrary convex and
Lipschitz loss functions. The new algorithm is based on a combination of ?random playout?
and randomized rounding, which assigns random binary labels to future unseen instances,
in a way depending on the loss subgradients. Our resulting Randomized Rounding (R2 )
Forecaster has a parameter trading off regret performance and computational complexity,
and runs in polynomial time (for T predictions, it requires computing O(T 2 ) empirical risk
minimizers in general, as opposed to O(T ) for generic follow-the-leader algorithms). The
regret of the R2 Forecaster is determined by the Rademacher complexity of the comparison
class. The connection between online learnability and Rademacher complexity has also been
explored in [2, 1]. However, these works focus on the information-theoretically achievable
regret, as opposed to computationally efficient algorithms. The idea of ?random playout?,
in the context of online learning, has also been used in [16, 3], but we apply this idea in a
different way.
We show that the R2 Forecaster can be used to design the first efficient online learning
algorithm for collaborative filtering with trace-norm constrained matrices. While this is a
well-known setting, a straightforward application of standard online learning approaches,
such as mirror descent, appear to give only trivial performance guarantees. Moreover, our
1
regret bound matches the best currently known sample complexity bound in the batch
distribution-free setting [21].
As a different application, we consider the relationship between batch learning and transductive online learning. This relationship was analyzed in [16], in the context of binary
prediction with respect to classes of bounded VC dimension. Their main result was that
efficient learning in a statistical setting implies efficient learning in the transductive online
setting, but at an inferior rate of T 3/4 (where T is the number of rounds). The main open
question posed by that paper is whether a better rate can be obtained. Using the R2 Fore?
caster, we improve on those results, and provide an efficient algorithm with the optimal T
rate, for a wide class of losses. This shows that efficient batch learning not only implies
efficient transductive online learning (the main thesis of [16]), but also that the same rates
can be obtained, and for possibly non-binary prediction problems as well.
We emphasize that the R2 Forecaster requires computing many empirical risk minimizers
(ERM?s) at each round, which might be prohibitive in practice. Thus, while it does run
in polynomial time whenever an ERM can be efficiently computed, we make no claim that
it is a ?fully practical? algorithm. Nevertheless, it seems to be a useful tool in showing
that efficient online learnability is possible in various settings, often working in cases where
more standard techniques appear to fail. Moreover, we hope the techniques we employ
might prove useful in deriving practical online algorithms in other contexts.
2
The Minimax Forecaster
We start by introducing the sequential game of prediction with expert advice ?see [10].
The game is played between a forecaster and an adversary, and is specified by an outcome
space Y, a prediction space P, a nonnegative loss function ` : P ? Y ? R, which measures
the discrepancy between the forecaster?s prediction and the outcome, and an expert class
F. Here we focus on classes F of static experts, whose prediction at each round t does
not depend on the outcome in previous rounds. Therefore, we think of each f ? F simply
as a sequence f = (f1 , f2 , . . . ) where each ft ? P. At each step t = 1, 2, . . . of the game,
the forecaster outputs a prediction pt ? P and simultaneously the adversary reveals an
outcome yt ? Y. The forecaster?s goal is to predict the outcome sequence almost as well as
the best expert in the class F, irrespective of the outcome sequence y = (y1 , y2 , . . . ). The
performance of a forecasting strategy A is measured by the worst-case regret
!
T
T
X
X
VT (A, F) = sup
`(pt , yt ) ? inf
`(ft , yt )
(1)
y?Y T
f ?F
t=1
t=1
viewed as a function of the horizon T . To simplify notation, let L(f , y) =
PT
t=1
`(ft , yt ).
Consider now the special case where the horizon T is fixed and known in advance, the
outcome space is Y = {?1, +1}, the prediction space is P = [?1, +1], and the loss is the
absolute loss `(p, y) = |p ? y|. We will denote the regret in this special case as VTabs (A, F).
The Minimax Forecaster ?which is based on work presented in [9] and [11], see also [10]
for an exposition? is derived by an explicit analysis of the minimax regret inf A VTabs (A, F),
where the infimum is over all forecasters A producing at round t a prediction pt as a function of p1 , y1 , . . . pt?1 , yt?1 . For general online learning problems, the analysis of this quantity is intractable. However, for the specific setting we focus on (absolute loss and binary
outcomes), one can get both an explicit expression for the minimax regret, as well as an
PT
explicit algorithm, provided inf f ?F t=1 `(ft , yt ) can be efficiently computed for any sequence y1 , . . . , yT . This procedure is akin to performing empirical risk minimization (ERM)
in statistical learning. A full development of the analysis is out of scope, but is outlined in
Appendix A of the supplementary material. In a nutshell, the idea is to begin by calculating the optimal prediction in the last round T , and then work backwards, calculating the
optimal prediction at round T ? 1, T ? 2 etc. Remarkably, the value of inf A VTabs (A, F) is
exactly the Rademacher complexity RT (F) of the class F, which is known to play a crucial
role in understanding the sample complexity in statistical learning [5]. In this paper, we
2
define it as¹:

    R_T(F) = E[ sup_{f∈F} Σ_{t=1}^{T} σ_t f_t ]    (2)

where σ_1, ..., σ_T are i.i.d. Rademacher random variables, taking values −1, +1 with equal
probability. When R_T(F) = o(T), we get a minimax regret inf_A V_T^abs(A, F) = o(T) which
implies a vanishing per-round regret.

¹In the statistical learning literature, it is more common to scale this quantity by 1/T, but the
form we use here is more convenient for stating cumulative regret bounds.
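As an aside, whenever the supremum in (2) is computable (an ERM-type operation, as discussed below), R_T(F) can be estimated by plain Monte Carlo. A sketch (ours), for a class given explicitly as a finite matrix of static-expert predictions:

import numpy as np

def rademacher_estimate(F, n_samples=1000, seed=0):
    """Monte Carlo estimate of R_T(F) = E[sup_{f in F} sum_t sigma_t f_t],
    where F is an (n_experts x T) array of static expert predictions."""
    rng = np.random.default_rng(seed)
    n, T = F.shape
    total = 0.0
    for _ in range(n_samples):
        sigma = rng.choice([-1.0, 1.0], size=T)  # i.i.d. Rademacher signs
        total += float(np.max(F @ sigma))        # sup over the (finite) class
    return total / n_samples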
In terms of an explicit algorithm, the optimal prediction p_t at round t is given by a
complicated-looking recursive expression, involving exponentially many terms. Indeed, for
general online learning problems, this is the most one seems able to hope for. However, an
apparently little-known fact is that when one deals with a class F of fixed binary sequences
as discussed above, then one can write the optimal prediction p_t in a much simpler way.
Letting Y_1, ..., Y_T be i.i.d. Rademacher random variables, the optimal prediction at round
t can be written as

    p_t = E[ inf_{f∈F} L(f, y_1 ⋯ y_{t−1} (−1) Y_{t+1} ⋯ Y_T) − inf_{f∈F} L(f, y_1 ⋯ y_{t−1} 1 Y_{t+1} ⋯ Y_T) ].    (3)

In words, the prediction is simply the expected difference between the minimal cumulative
loss over F, when the adversary plays −1 at round t and random values afterwards, and
the minimal cumulative loss over F, when the adversary plays +1 at round t, and the
same random values afterwards. We refer the reader to Appendix A of the supplementary
material for how this is derived. We denote this optimal strategy (for absolute loss and
binary outcomes) as the Minimax Forecaster (mf):
Algorithm 1 Minimax Forecaster (mf)
for t = 1 to T do
  Predict p_t as defined in Eq. (3)
  Receive outcome y_t and suffer loss |p_t − y_t|
end for
The relevant guarantee for mf is summarized in the following theorem.
Theorem 1. For any class F ⊆ [−1, +1]^T of static experts, the regret of the Minimax
Forecaster (Algorithm 1) satisfies V_T^abs(mf, F) = R_T(F).
2.1 Making the Minimax Forecaster Efficient
The Minimax Forecaster described above is not computationally efficient, as the computation of p_t requires averaging over exponentially many ERMs. However, by a martingale
argument, it is not hard to show that it is in fact sufficient to compute only two ERMs per
round.
Algorithm 2 Minimax Forecaster with efficient implementation (mf*)
for t = 1 to T do
  For i = t + 1, ..., T, let Y_i be a Rademacher random variable
  Let p_t := inf_{f∈F} L(f, y_1 ... y_{t−1} (−1) Y_{t+1} ... Y_T) − inf_{f∈F} L(f, y_1 ... y_{t−1} 1 Y_{t+1} ... Y_T)
  Predict p_t, receive outcome y_t and suffer loss |p_t − y_t|
end for
Theorem 2. For any class F ⊆ [−1, +1]^T of static experts, the regret of the randomized
forecasting strategy mf* (Algorithm 2) satisfies

    V_T^abs(mf*, F) ≤ R_T(F) + √(2T ln(1/δ))

with probability at least 1 − δ. Moreover, if the predictions p = (p_1, ..., p_T) are computed
reusing the random values Y_1, ..., Y_T computed at the first iteration of the algorithm, rather
than drawing fresh values at each iteration, then it holds that

    E[L(p, y)] − inf_{f∈F} L(f, y) ≤ R_T(F)    for all y ∈ {−1, +1}^T.
Proof sketch. To prove the second statement, note that |E[p_t] − y_t| = E|p_t − y_t| for any fixed
y_t ∈ {−1, +1} and p_t bounded in [−1, +1], and use Thm. 1. To prove the first statement,
note that |p_t − y_t| − E_{p_t}|p_t − y_t| for t = 1, ..., T is a martingale difference sequence with
respect to p_1, ..., p_T, and apply Azuma's inequality.
The second statement in the theorem bounds the regret only in expectation and is thus
weaker than the first one. On the other hand, it might have algorithmic benefits. Indeed, if
we reuse the same values for Y_1, ..., Y_T, then the computations of the infima over f in mf*
are with respect to an outcome sequence which changes only at one point in each round.
Depending on the specific learning problem, it might be easier to re-compute the infimum
after changing a single point in the outcome sequence, as opposed to computing the infimum
over a different outcome sequence in each round.
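A minimal sketch of mf* in code, assuming a black-box ERM oracle min_cum_loss(y) that returns inf_{f∈F} L(f, y) for a complete outcome sequence y (all interface names here are ours, not the paper's):

import numpy as np

def mf_star(T, min_cum_loss, get_outcome, seed=0):
    """Minimax Forecaster with random playout (Algorithm 2, a sketch).
    min_cum_loss(y): inf_{f in F} sum_t |f_t - y_t| for y in {-1,+1}^T.
    get_outcome(t): reveals the adversary's outcome y_t after we predict."""
    rng = np.random.default_rng(seed)
    y_seen, total_loss = [], 0.0
    for t in range(T):
        future = rng.choice([-1.0, 1.0], size=T - t - 1)  # random playout Y_{t+1..T}
        p_t = (min_cum_loss(np.concatenate([y_seen, [-1.0], future]))
               - min_cum_loss(np.concatenate([y_seen, [1.0], future])))
        y_t = get_outcome(t)
        total_loss += abs(p_t - y_t)
        y_seen.append(y_t)
    return total_loss

To obtain the in-expectation guarantee of Theorem 2, one would draw Y_1, ..., Y_T once up front and reuse them, rather than drawing fresh values each round as above.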
3 The R2 Forecaster
The Minimax Forecaster presented above is very specific to the absolute loss ℓ(f, y) =
|f − y| and for binary outcomes Y = {−1, +1}, which limits its applicability. We note that
extending the forecaster to other losses or different outcome spaces is not trivial: indeed,
the recursive unwinding of the minimax regret term, leading to an explicit expression and
an explicit algorithm, does not work as-is for other cases. Nevertheless, we will now show
how one can deal with general (convex, Lipschitz) loss functions and outcomes belonging to
any real interval [−b, b].
The algorithm we propose essentially uses the Minimax Forecaster as a subroutine, by
feeding it with a carefully chosen sequence of binary values z_t, and using predictions f_t
which are scaled to lie in the interval [−1, +1]. The values of z_t are based on a randomized
rounding of values in [−1, +1], which depend in turn on the loss subgradient. Thus, we
denote the algorithm as the Randomized Rounding (R2) Forecaster.
To describe the algorithm, we introduce some notation. For any scalar f ∈ [−b, b], define
f̃ = f/b to be the scaled version of f into the range [−1, +1]. For vectors f, define
f̃ = (1/b)f. Also, we let ∂_{p_t}ℓ(p_t, y_t) denote any subgradient of the loss function ℓ with respect
to the prediction p_t. The pseudocode of the R2 Forecaster is presented as Algorithm 3 below,
and its regret guarantee is summarized in Thm. 3. The proof is presented in Appendix B
of the supplementary material.
Theorem 3. Suppose ℓ is convex and ρ-Lipschitz in its first argument. For any F ⊆ [−b, b]^T
the regret of the R2 Forecaster (Algorithm 3) satisfies

    V_T(R2, F) ≤ ρ R_T(F) + ρ b ( √(2T/η) + 2 √(2T ln(2/δ)) )    (4)

with probability at least 1 − δ.
The prediction p_t which the algorithm computes is an empirical approximation to

    b E_{Y_{t+1},...,Y_T}[ inf_{f∈F} L(f̃, z_1 ... z_{t−1} (−1) Y_{t+1} ... Y_T) − inf_{f∈F} L(f̃, z_1 ... z_{t−1} 1 Y_{t+1} ... Y_T) ]

by repeatedly drawing independent values for Y_{t+1}, ..., Y_T and averaging. The accuracy of
the approximation is reflected in the precision parameter η. A larger value of η improves the
regret bound, but also increases the runtime of the algorithm. Thus, η provides a trade-off
between the computational complexity of the algorithm and its regret guarantee.
Algorithm 3 The R2 Forecaster
Input: Upper bound b on |f_t|, |y_t| for all t = 1, ..., T and f ∈ F; upper bound ρ on sup_{p,y∈[−b,b]} |∂_p ℓ(p, y)|; precision parameter η ≥ 1/T.
for t = 1 to T do
  p_t := 0
  for j = 1 to ηT do
    For i = t, ..., T, let Y_i be a Rademacher random variable
    Draw Δ := inf_{f∈F} L(f̃, z_1 ... z_{t−1} (−1) Y_{t+1} ... Y_T) − inf_{f∈F} L(f̃, z_1 ... z_{t−1} 1 Y_{t+1} ... Y_T)
    Let p_t := p_t + bΔ/(ηT)
  end for
  Predict p_t
  Receive outcome y_t and suffer loss ℓ(p_t, y_t)
  Let r_t := (1/2)(1 − (1/ρ) ∂_{p_t}ℓ(p_t, y_t)) ∈ [0, 1]
  Let z_t := 1 with probability r_t, and z_t := −1 with probability 1 − r_t
end for
We note that even when η is taken to be a constant fraction, the resulting algorithm still runs in
polynomial time O(T^2 c), where c is the time to compute a single ERM. In subsequent results
pertaining to this Forecaster, we will assume that η is taken to be a constant fraction.
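Analogously to the mf* sketch above, here is a sketch of the R2 Forecaster for a generic convex ρ-Lipschitz loss; min_cum_loss is again an assumed ERM oracle (here over the scaled class), and loss, subgrad, and get_outcome are assumed callables:

import numpy as np

def r2_forecaster(T, b, rho, eta, min_cum_loss, loss, subgrad, get_outcome, seed=0):
    """R2 Forecaster (Algorithm 3, a sketch). min_cum_loss(z) returns
    inf_{f in F} L(f~, z) for z in {-1,+1}^T; subgrad(p, y) is a loss
    subgradient in the first argument with |subgrad| <= rho."""
    rng = np.random.default_rng(seed)
    z, total_loss = [], 0.0
    n_draws = max(1, int(round(eta * T)))
    for t in range(T):
        p_t = 0.0
        for _ in range(n_draws):                       # average eta*T playouts
            future = rng.choice([-1.0, 1.0], size=T - t - 1)
            p_t += (b / n_draws) * (
                min_cum_loss(np.concatenate([z, [-1.0], future]))
                - min_cum_loss(np.concatenate([z, [1.0], future])))
        y_t = get_outcome(t)
        total_loss += loss(p_t, y_t)
        r_t = 0.5 * (1.0 - subgrad(p_t, y_t) / rho)    # randomized rounding prob.
        z.append(1.0 if rng.random() < r_t else -1.0)
    return total_loss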
We end this section with a remark that plays an important role in what follows.
Remark 1. The predictions of our forecasting strategies do not depend on the ordering of
the predictions of the experts in F. In other words, all the results proven so far also hold in
a setting where the elements of F are functions f : {1, ..., T} → P, and the adversary has
control on the permutation π_1, ..., π_T of {1, ..., T} that is used to define the prediction f(π_t)
of expert f at time t.² Also, Thm. 1 implies that the value of V_T^abs(F) remains unchanged
irrespective of the permutation chosen by the adversary.

²Formally, at each step t: (1) the adversary chooses and reveals the next element π_t of the
permutation; (2) the forecaster chooses p_t ∈ P and simultaneously the adversary chooses y_t ∈ Y.
4 Application 1: Transductive Online Learning
The first application we consider is a rather straightforward one, in the context of transductive online learning [6]. In this model, we have an arbitrary sequence of labeled examples
(x_1, y_1), ..., (x_T, y_T), where only the set {x_1, ..., x_T} of unlabeled instances is known to the
learner in advance. At each round t, the learner must provide a prediction p_t for the label
of y_t. The true label y_t is then revealed, and the learner incurs a loss ℓ(p_t, y_t). The learner's
goal is to minimize the transductive online regret Σ_{t=1}^{T} ℓ(p_t, y_t) − inf_{f∈F} Σ_{t=1}^{T} ℓ(f(x_t), y_t) with
respect to a fixed class of predictors F of the form {x ↦ f(x)}.
The work [16] considers the binary classification case with zero-one loss. Their main result is that if a class F of binary functions has bounded VC dimension d, and there exists
an efficient algorithm to perform empirical risk minimization, then one can construct an
efficient randomized algorithm for transductive online learning, whose regret is at most
O(T^{3/4} √(d ln(T))) in expectation. The significance of this result is that efficient batch learning (via empirical risk minimization) implies efficient learning in the transductive online
setting. This is an important result, as online learning can be computationally harder than
batch learning; see, e.g., [8] for an example in the context of Boolean learning.
A major open question posed by [16] was whether one can achieve the optimal rate O(√(dT)),
matching the rate of a batch learning algorithm in the statistical setting. Using the R2
Forecaster, we can easily achieve the above result, as well as similar results in a strictly
more general setting. This shows that efficient batch learning not only implies efficient
transductive online learning (the main thesis of [16]), but also that the same rates can be
obtained, and for possibly non-binary prediction problems as well.
Theorem 4. Suppose we have a computationally efficient algorithm for empirical risk minimization (with respect to the zero-one loss) over a class F of {0, 1}-valued functions with
VC dimension d. Then, in the transductive online model, the efficient randomized forecaster
mf* achieves an expected regret of O(√(dT)) with respect to the zero-one loss.
Moreover, for an arbitrary class F of [−b, b]-valued functions with Rademacher complexity
R_T(F), and any convex ρ-Lipschitz loss function, if there exists a computationally efficient
algorithm for empirical risk minimization, then the R2 Forecaster is computationally efficient
and achieves, in the transductive online model, a regret of ρ R_T(F) + O(ρb √(T ln(T/δ)))
with probability at least 1 − δ.
Proof. Since the set {x_1, ..., x_T} of unlabeled examples is known, we reduce the online
transductive model to prediction with expert advice in the setting of Remark 1. This is
done by mapping each function f ∈ F to a function f : {1, ..., T} → P by t ↦ f(x_t), which
is equivalent to an expert in the setting of Remark 1. When F maps to {0, 1}, and we care
about the zero-one loss, we can use the forecaster mf* to compute randomized predictions
and apply Thm. 2 to bound the expected transductive online regret with R_T(F). For a class
with VC dimension d, R_T(F) ≤ c√(dT) for some constant c > 0, using Dudley's chaining
method [12], and this concludes the proof of the first part of the theorem. The second part
is an immediate corollary of Thm. 3.
We close this section by contrasting our results for online transductive learning with those
of [7] about standard online learning. If F contains {0, 1}-valued functions, then the optimal
regret bound for online learning is order of √(d₀ T), where d₀ is the Littlestone dimension of
F. Since the Littlestone dimension of a class is never smaller than its VC dimension, we
conclude that online learning is a harder setting than online transductive learning.
5 Application 2: Online Collaborative Filtering
We now turn to discuss the application of our results in the context of collaborative filtering
with trace-norm constrained matrices, presenting what is (to the best of our knowledge) the
first efficient online algorithm for this problem.
In collaborative filtering, the learning problem is to predict entries of an unknown m × n
matrix based on a subset of its observed entries. A common approach is norm regularization,
where we seek a low-norm matrix which matches the observed entries as best as possible.
The norm is often taken to be the trace-norm [22, 19, 4], although other norms have also
been considered, such as the max-norm [18] and the weighted trace-norm [20, 13].
Previous theoretical treatments of this problem assumed a stochastic setting, where the observed entries are picked according to some underlying distribution (e.g., [23, 21]). However,
even when the guarantees are distribution-free, assuming a fixed distribution fails to capture
important aspects of collaborative filtering in practice, such as non-stationarity [17]. Thus,
an online adversarial setting, where no distributional assumptions whatsoever are required,
seems to be particularly well-suited to this problem domain.
In an online setting, at each round t the adversary reveals an index pair (i_t, j_t) and secretly
chooses a value y_t for the corresponding matrix entry. After that, the learner selects a
prediction p_t for that entry. Then y_t is revealed and the learner suffers a loss ℓ(p_t, y_t).
Hence, the goal of a learner is to minimize the regret with respect to a fixed class W
of prediction matrices, Σ_{t=1}^{T} ℓ(p_t, y_t) − inf_{W∈W} Σ_{t=1}^{T} ℓ(W_{i_t,j_t}, y_t). Following reality, we
will assume that the adversary picks a different entry in each round. When the learner's
performance is measured by the regret after all T = mn entries have been predicted, the
online collaborative filtering setting reduces to prediction with expert advice as discussed
in Remark 1.
As mentioned previously, W is often taken to be a convex class of matrices with bounded
trace-norm. Many convex learning problems, such as linear and kernel-based predictors,
as well as matrix-based predictors, can be learned efficiently both in a stochastic and an
online setting, using mirror descent or regularized follow-the-leader methods. However,
for reasonable choices of W, a straightforward application of these techniques can lead
to algorithms with trivial bounds. In particular, in the case of W consisting of m × n
matrices with trace-norm at most r, standard online regret bounds would scale like O(r√T).
Since for this norm one typically has r = O(√(mn)), we get a per-round regret guarantee
of O(√(mn/T)). This is a trivial bound, since it becomes "meaningful" (smaller than a
constant) only after all T = mn entries have been predicted.
On the other hand, based on general techniques developed in [15] and greatly extended in
[1], it can be shown that online learnability is information-theoretically possible for such W.
However, these techniques do not provide a computationally efficient algorithm. Thus, to
the best of our knowledge, there is currently no efficient (polynomial time) online algorithm
which attains non-trivial regret. In this section, we show how to obtain such an algorithm
using the R2 Forecaster.
Consider first the transductive online setting, where the set of indices to be predicted is
known in advance, and the adversary may only choose the order and values of the entries.
It is readily seen that the R2 Forecaster can be applied in this setting, using any convex class
W of fixed matrices with bounded entries to compete against, and any convex Lipschitz loss
function. To do so, we let {i_k, j_k}_{k=1}^{T} be the set of entries, and run the R2 Forecaster with
respect to F = {t ↦ W_{i_t,j_t} : W ∈ W}, which corresponds to a class of experts as discussed
in Remark 1.
What is perhaps more surprising is that the R2 Forecaster can also be applied in a non-transductive
setting, where the indices to be predicted are not known in advance. Moreover,
the Forecaster doesn't even need to know the horizon T in advance. The key idea to achieve
this is to utilize the non-asymptotic nature of the learning problem, namely, that the game
is played over a finite m × n matrix, so the time horizon is necessarily bounded.
The algorithm we propose is very simple: we apply the R2 Forecaster as if we are in a
setting with time horizon T = mn, which is played over all entries of the m × n matrix. By
Remark 1, the R2 Forecaster does not need to know the order in which these m × n entries
are going to be revealed. Whenever W is convex and ℓ is a convex function, we can find an
ERM in polynomial time by solving a convex problem. Hence, we can implement the R2
Forecaster efficiently.
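For concreteness, one ERM call in this setting could be written as the following convex program. This is a sketch using cvxpy with the absolute loss; the library, solver, and loss choice are illustrative, not prescribed by the paper:

import cvxpy as cp
import numpy as np

def trace_norm_erm(n, entries, values, r, b):
    """One ERM call over W = {W : ||W||_tr <= r, |W_ij| <= b}, with the
    absolute loss on the outcome sequence seen so far. entries is a list of
    (i, j) index pairs and values the corresponding (possibly playout) labels."""
    W = cp.Variable((n, n))
    preds = cp.hstack([W[i, j] for (i, j) in entries])
    objective = cp.Minimize(cp.sum(cp.abs(preds - np.asarray(values))))
    constraints = [cp.normNuc(W) <= r, cp.abs(W) <= b]
    problem = cp.Problem(objective, constraints)
    problem.solve()
    return problem.value  # inf of the cumulative loss over the class

The R2 Forecaster's min_cum_loss oracle (see the sketch in Section 3) would wrap such a call, with the binary playout values z, Y mapped back to matrix-entry labels.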
To show that this is indeed a viable strategy, we need the following lemma, whose proof is
presented in Appendix C of the supplementary material.
Lemma 1. Consider a (possibly randomized) forecaster A for a class F whose regret after
T steps satisfies V_T(A, F) ≤ G with probability at least 1 − δ > 1/2. Furthermore, suppose the
loss function is such that

    inf_{p′∈P} sup_{y∈Y} inf_{p∈P} ( ℓ(p, y) − ℓ(p′, y) ) ≥ 0.

Then max_{t=1,...,T} V_t(A, F) ≤ G with probability at least 1 − δ.
Note that a simple sufficient condition for the assumption on the loss function to hold is
that P = Y and ℓ(p, y) ≥ ℓ(y, y) for all p, y ∈ P. For instance, the absolute loss with
P = Y = [−1, +1] satisfies this, since ℓ(y, y) = 0 ≤ ℓ(p, y).
Using this lemma, the following theorem exemplifies how we can obtain a regret guarantee
for our algorithm, in the case of W consisting of the convex set of matrices with bounded
trace-norm and bounded entries. For the sake of clarity, we will consider n × n matrices.
Theorem 5. Let ℓ be a loss function which satisfies the conditions of Lemma 1. Also, let W
consist of n × n matrices with trace-norm at most r = O(n) and entries at most b = O(1),
and suppose we apply the R2 Forecaster over time horizon n^2 and all entries of the matrix. Then
with probability at least 1 − δ, after T rounds, the algorithm achieves an average per-round
regret of at most

    O( (n^{3/2} + n √(ln(n/δ))) / T )

uniformly over T = 1, ..., n^2.
Proof. In our setting, where the adversary chooses a different entry at each round, [21,
Theorem 6] implies that for the class W′ of all matrices with trace-norm at most r = O(n),
it holds that R_T(W′)/T ≤ O(n^{3/2}/T). Therefore, R_{n^2}(W′) ≤ O(n^{3/2}). Since W ⊆ W′,
we get by definition of the Rademacher complexity that R_{n^2}(W) = O(n^{3/2}) as well. By
Thm. 3, the regret after n^2 rounds is O(n^{3/2} + n √(ln(n/δ))) with probability at least 1 − δ.
Applying Lemma 1, we get that the cumulative regret at the end of any round T = 1, ..., n^2
is at most O(n^{3/2} + n √(ln(n/δ))), as required.
This bound becomes non-trivial after n^{3/2} entries are revealed, which is still a vanishing proportion of all n² entries. While the regret might seem unusual compared to standard regret bounds (which usually have rates of 1/√T for general losses), it is a natural outcome of the non-asymptotic nature of our setting, where T can never be larger than n². In fact, this is the same rate one would obtain in a batch setting, where the entries are drawn from an arbitrary distribution. Moreover, an assumption such as boundedness of the entries is required for currently-known guarantees even in a batch setting; see [21] for details.
Acknowledgments
The first author acknowledges partial support by the PASCAL2 NoE under EC grant FP7-216886.
References
[1] K. Sridharan, A. Rakhlin, and A. Tewari. Online learning: Random averages, combinatorial parameters, and learnability. In NIPS, 2010.
[2] J. Abernethy, P. Bartlett, A. Rakhlin, and A. Tewari. Optimal strategies and minimax
lower bounds for online convex games. In COLT, 2009.
[3] J. Abernethy and M. Warmuth. Repeated games against budgeted adversaries. In
NIPS, 2010.
[4] F. Bach. Consistency of trace-norm minimization. Journal of Machine Learning Research, 9:1019–1048, 2008.
[5] P. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds
and structural results. In COLT, 2001.
[6] S. Ben-David, E. Kushilevitz, and Y. Mansour. Online learning versus offline learning.
Machine Learning, 29(1):45–63, 1997.
[7] S. Ben-David, D. Pál, and S. Shalev-Shwartz. Agnostic online learning. In COLT, 2009.
[8] A. Blum. Separating distribution-free and mistake-bound learning models over the
boolean domain. SIAM J. Comput., 23(5):990–1000, 1994.
[9] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. Helmbold, R. Schapire, and M. Warmuth.
How to use expert advice. Journal of the ACM, 44(3):427–485, May 1997.
[10] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University
Press, 2006.
[11] T. Chung. Approximate methods for sequential decision making using expert advice.
In COLT, 1994.
[12] R. M. Dudley. A Course on Empirical Processes, École de Probabilités de St. Flour, 1982, volume 1097 of Lecture Notes in Mathematics. Springer Verlag, 1984.
[13] R. Foygel, R. Salakhutdinov, O. Shamir, and N. Srebro. Learning with the weighted
trace-norm under arbitrary sampling distributions. In NIPS, 2011.
[14] E. Hazan. The convex optimization approach to regret minimization. In S. Nowozin
S. Sra and S. Wright, editors, Optimization for Machine Learning. MIT Press, To
Appear.
[15] J. Abernethy, A. Agarwal, P. Bartlett, and A. Rakhlin. A stochastic view of optimal regret through minimax duality. In COLT, 2009.
[16] S. Kakade and A. Kalai. From batch to transductive online learning. In NIPS, 2005.
[17] Y. Koren. Collaborative filtering with temporal dynamics. In KDD, 2009.
[18] J. Lee, B. Recht, R. Salakhutdinov, N. Srebro, and J. Tropp. Practical large-scale
optimization for max-norm regularization. In NIPS, 2010.
[19] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In NIPS, 2007.
[20] R. Salakhutdinov and N. Srebro. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In NIPS, 2010.
[21] O. Shamir and S. Shalev-Shwartz. Collaborative filtering with the trace norm: Learning,
bounding, and transducing. In COLT, 2011.
[22] N. Srebro, J. Rennie, and T. Jaakkola. Maximum-margin matrix factorization. In
NIPS, 2004.
[23] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In COLT, 2005.
Exploiting spatial overlap to efficiently compute
appearance distances between image windows
Bogdan Alexe
ETH Zurich
Viviana Petrescu
ETH Zurich
Vittorio Ferrari
ETH Zurich
Abstract
We present a computationally efficient technique to compute the distance of highdimensional appearance descriptor vectors between image windows. The method
exploits the relation between appearance distance and spatial overlap. We derive
an upper bound on appearance distance given the spatial overlap of two windows
in an image, and use it to bound the distances of many pairs between two images.
We propose algorithms that build on these basic operations to efficiently solve
tasks relevant to many computer vision applications, such as finding all pairs of
windows between two images with distance smaller than a threshold, or finding
the single pair with the smallest distance. In experiments on the PASCAL VOC 07
dataset, our algorithms accurately solve these problems while greatly reducing the
number of appearance distances computed, and achieve larger speedups than approximate nearest neighbour algorithms based on trees [18] and on hashing [21].
For example, our algorithm finds the most similar pair of windows between two
images while computing only 1% of all distances on average.
1 Introduction
Computing the appearance distance between two windows is a fundamental operation in a wide
variety of computer vision techniques. Algorithms for weakly supervised learning of object
classes [7, 11, 16] typically compare large sets of windows between images trying to find recurring
patterns of appearance. Sliding-window object detectors based on kernel SVMs [13, 24] compute
appearance distances between the support vectors and a large number of windows in the test image.
In human pose estimation, [22] computes the color histogram dissimilarity between many candidate
windows for lower and upper arms. In image retrieval the user can search a large image database for
a query object specified by an image window [20]. Finally, many tracking algorithms [4, 5] compare
a window around the target object in the current frame to all windows in a surrounding region of the
next frame.
In most cases one is not interested in computing the distance between all pairs of windows from two
sets, but in a small subset of low distances, such as all pairs below a given threshold, or the single
best pair. Because of this, computer vision researchers often rely on efficient nearest neighbour
algorithms [2, 6, 10, 17, 18, 21]. Exact nearest neighbour algorithms organize the appearance
descriptors into trees which can be efficiently searched [17]. However, these methods work well only
for descriptors of small dimensionality n (typically n < 20), and their speedup vanishes for larger
n (e.g. the popular GIST descriptor [19] has n = 960). Locality sensitive hashing (LSH [2, 10, 21])
techniques hash the descriptors into bins, so that similar descritors are mapped to the same bins with
high probability. LSH is typically used for efficiently finding approximate nearest neighbours in
high dimensions [2, 6].
All the above methods consider windows only as points in appearance space. However, windows
exist also as points in the geometric space defined as their 4D coordinates in the image they lie in. In
this geometric space, a natural distance between two windows is their spatial overlap (fig. 1). In this
paper we propose to take advantage of an important relation between the geometric and appearance
spaces: the appearance distance between two windows decreases as their spatial overlap increases.
We derive an upper bound on the appearance distance between two windows in the same image,
Fig. 1: Relation between spatial overlap and appearance distance. Windows w1 , w2 in an image I are
embedded in geometric space and in appearance space. All windows overlapping more than r with w1 are at
most at distance B(r) in appearance space. The bound B(r) decreases as overlap increases (i.e. r decreases).
given their spatial overlap (sec. 2). We then use this bound in conjuction with the triangle inequality
to bound the appearance distances of many pairs of windows between two images, given the distance
of just one pair. Building on these basic operations, we design algorithms to efficiently find all pairs
with distance smaller than a threshold (sec. 3) and to find the single pair with the smallest distance
(sec. 4).
The techniques we propose reduce computation by minimizing the number of times appearance
distances are computed. They are complementary to methods for reducing the cost of computing
one distance, such as dimensionality reduction [15] or Hamming embeddings [14, 23].
We experimentally demonstrate in sec. 5 that the proposed algorithms accurately solve the above
problems while greatly reducing the number of appearance distances computed. We compare to
approximate nearest neighbour algorithms based on trees [18], as well as on the recent LSH technique [21]. The results show our techniques outperform them in the setting we consider, where the
datapoints are embedded in a space with additional overlap structure.
2 Relation between spatial overlap and appearance distance
Windows w in an image I are embedded in two spaces at the same time (fig. 1). In geometric space, w is represented by its 4 spatial coordinates (e.g. x, y center, width, height). The distance between two windows is defined based on their spatial overlap o(w1, w2) = |w1 ∩ w2| / |w1 ∪ w2| ∈ [0, 1], where ∩ denotes the area of the intersection and ∪ the area of the union. In appearance space, w is represented by a high dimensional vector describing the pixel pattern inside it, as computed by a function f_app(w) : I → R^n (e.g. the GIST descriptor has n = 960 dimensions). In appearance space, two windows are compared using a distance d(f_app(w1), f_app(w2)).
Two overlapping windows w1, w2 in an image I share the pixels contained in their intersection (fig. 1). The spatial overlap of the two windows correlates with the proportion of common pixels input to f_app when computing the descriptor for each window. In general, f_app varies smoothly with the geometry of w, so that windows of similar geometry are close in appearance space. Consequently, the spatial overlap o and appearance distance d are related. In this paper we exploit this relation to derive an upper bound B(o(w1, w2)) on the appearance distance between two overlapping windows.
We present here the general form of the bound B, its main properties, and explain why it is useful. In subsections 2.1 and 2.2 we derive the actual bound itself. To simplify the notation we use d(w1, w2) to denote the appearance distance d(f_app(w1), f_app(w2)). We refer to it simply as distance and we say overlap for spatial overlap. The upper bound B is a function of the overlap o(w1, w2), and has the following property:

d(w1, w2) ≤ B(o(w1, w2))   ∀ w1, w2   (1)

Moreover, B is a monotonically decreasing function:

B(o1) ≤ B(o2)   ∀ o1 ≥ o2   (2)
Fig. 2: Triangle inequality in appearance space. The triangle inequality (4) holds for any three points f_app(w1), f_app(w2) and f_app(w3) in appearance space. (a) General case; (b) Lower bound case: |d(w1, w2) − d(w2, w3)| = d(w1, w3); (c) Upper bound case: d(w1, w3) = d(w1, w2) + d(w2, w3).
This property means B continuously decreases as overlap increases. Therefore, all pairs of windows
within an overlap radius r (i.e. o(w1, w2) ≥ r) have distance below B(r) (fig. 1):

d(w1, w2) ≤ B(o(w1, w2)) ≤ B(r)   ∀ w1, w2 with o(w1, w2) ≥ r   (3)
As defined above, B bounds the appearance distance between two windows in the same image. Now we show how it can be used to derive a bound on the distances between windows in two different images I^1, I^2. Given two windows w1, w2 in I^1 and a window w3 in I^2, we use the triangle inequality to derive (fig. 2)

|d(w1, w2) − d(w2, w3)| ≤ d(w1, w3) ≤ d(w1, w2) + d(w2, w3)   (4)

Using the bound B in eq. (4) we obtain

max(0, d(w2, w3) − B(o(w1, w2))) ≤ d(w1, w3) ≤ B(o(w1, w2)) + d(w2, w3)   (5)
Eq. (5) delivers lower and upper bounds for d(w1, w3) without explicitly computing it (given that d(w2, w3) and o(w1, w2) are known). These bounds will form the basis of our algorithms for reducing the number of times the appearance distance is computed when solving two classic tasks (sec. 3 and 4).
In the next subsection we estimate B for arbitrary window descriptors (e.g. color histograms, bag of visual words, GIST [19], HOG [8]) from a set of images (no human annotation required). In subsection 2.2 we derive exact bounds in closed form for histogram descriptors (e.g. color histograms, bag of visual words [25]).
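To make the use of eq. (5) concrete, the following minimal sketch (ours, not the authors' released code) turns one computed distance d(w2, w3) and one overlap o(w1, w2) into an interval for d(w1, w3); the callable B stands in for the bound of sec. 2.1 or 2.2.

def interval_for_distance(d_w2_w3, o_w1_w2, B):
    """Lower and upper bounds on d(w1, w3) from eq. (5): d(w2, w3) is a
    computed distance, o(w1, w2) a spatial overlap, B maps overlap -> bound."""
    b = B(o_w1_w2)
    return max(0.0, d_w2_w3 - b), d_w2_w3 + b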
2.1 Statistical bounds for arbitrary window descriptors

We estimate B_α from training data so that eq. (1) holds with probability α:

P( d(w1, w2) ≤ B_α(o(w1, w2)) ) = α   ∀ w1, w2   (6)
B_α is estimated from a set of M training images I = {I^m}. For each image I^m we sample N windows {w_i^m}, and then compute for all window pairs their overlap o_ij^m = o(w_i^m, w_j^m) and distance d_ij^m = d(w_i^m, w_j^m). The overall training dataset D is composed of (o_ij^m, d_ij^m) for every window pair:

D = { (o_ij^m, d_ij^m) | m ∈ {1, . . . , M}, i, j ∈ {1, . . . , N} }   (7)
We now quantize the overlap values into 100 bins and estimate B_α(o) for each bin o separately. For a bin o, we consider the set D_o of all distances d_ij^m for which o_ij^m is in the bin. We choose B_α(o) as the α-quantile of D_o (fig. 3a):

B_α(o) = q_α(D_o)   (8)

B_1(o) is the largest distance d_ij^m for which o_ij^m is in bin o. Fig. 3a shows the binned distance-overlap pairs and the bound B_0.95 for GIST descriptors [19]. The data comes from 100 windows sampled from more than 1000 images (details in sec. 5). Each column of this matrix is roughly Gaussian distributed, and its mean continuously decreases with increasing overlap, confirming our assumptions about the relation between overlap and distance (sec. 2). In particular, note how the mean distance decreases fastest for 50% to 80% overlap.
Fig. 3: Estimating B_0.95(o) and o_min(τ). (a) The estimated B_0.95(o) (white line) for the GIST [19] appearance descriptor. (b) Using B_0.95(o) we derive o_min(τ).
Given a window w1 and a distance τ we can use B_α to find windows w2 overlapping with w1 that are at most distance τ from w1. This will be used extensively by our algorithms presented in secs. 3 and 4. From B_α we can derive the smallest overlap o_min(τ) so that all pairs of windows overlapping more than o_min(τ) have distance smaller than τ (with probability more than α). Formally,

P( d(w1, w2) ≤ τ ) ≥ α   ∀ w1, w2 with o(w1, w2) ≥ o_min(τ)   (9)

and o_min(τ) is defined as the smallest overlap o for which the bound is smaller than τ (fig. 3b):

o_min(τ) = min{ o | B_α(o) ≤ τ }   (10)
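The estimation of B_α and the derived lookup o_min are simple to implement. Below is a minimal NumPy sketch under our own naming (sec. 2.1 does not prescribe code); in practice one may additionally enforce monotonicity of B_α over the bins.

import numpy as np

def estimate_bound_table(overlaps, distances, alpha=0.95, n_bins=100):
    """Per-bin alpha-quantile of distances (eq. 8); `overlaps` and
    `distances` are flat arrays over sampled same-image window pairs."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(overlaps, edges) - 1, 0, n_bins - 1)
    B = np.full(n_bins, np.inf)
    for k in range(n_bins):
        d_k = distances[bins == k]
        if d_k.size > 0:
            B[k] = np.quantile(d_k, alpha)
    return edges, B

def o_min(tau, edges, B):
    """Smallest overlap whose bound is at most tau (eq. 10); returns 1.0
    when no overlap level guarantees distance <= tau."""
    qualifying = np.where(B <= tau)[0]
    return edges[qualifying[0]] if qualifying.size else 1.0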
2.2 Exact bounds for histogram descriptors
The statistical bounds of the previous subsection can be estimated from images for any appearance
descriptor. In contrast, in this subsection we derive exact bounds in closed form for histogram descriptors (e.g. color histograms, bag of visual words [25]). Our derivation applies to L1-normalized histograms and the χ² distance. For simplicity of presentation, we assume every pixel contributes
one feature to the histogram of the window (as in color histograms). The derivation is very similar
for features computed on another regular grid (e.g. dense SURF bag-of-words [11]). We present
here the main idea behind the bound and give the full derivation in the supplementary material [1].
The upper bound B for two windows w1 and w2 corresponds to the limit case where the three regions w1 \ w2, w1 ∩ w2 and w2 \ w1 contain three disjoint sets of colors (or visual words in general). Therefore, the upper bound B is

B(w1, w2) = |w1 \ w2| / |w1| + |w2 \ w1| / |w2| + |w1 ∩ w2| · (1/|w1| − 1/|w2|)² / (1/|w1| + 1/|w2|)   (11)

Expressing the terms in (11) based on the windows' overlap o = o(w1, w2) = |w1 ∩ w2| / |w1 ∪ w2|, we obtain a closed form for the upper bound B that depends only on o:

B(w1, w2) = B(o(w1, w2)) = B(o) = 2 − 4 · o / (o + 1)   (12)
In practice, this exact bound is typically much looser than its corresponding statistical bound learned
from data (sec. 2.1). Therefore, we use the statistical bound for the experiments in sec. 5.
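For completeness, eq. (12) amounts to a one-line function (a minimal sketch of the closed form):

def exact_chi2_bound(o):
    """Closed-form upper bound of eq. (12) on the chi-square distance between
    the L1-normalized histograms of two windows with spatial overlap o."""
    return 2.0 - 4.0 * o / (o + 1.0)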
3 Efficiently computing all window pairs with distance smaller than τ

In this section we present an algorithm to efficiently find all pairs of windows with distance smaller than a threshold τ between two images I^1, I^2. Formally, given an input set of windows W^1 = {w_i^1} in image I^1 and a set W^2 = {w_j^2} in image I^2, the algorithm should return the set of pairs P_τ = { (w_i^1, w_j^2) | d(w_i^1, w_j^2) ≤ τ }.
Algorithm overview. Algorithm 1 summarizes our technique. Block 1 randomly samples a small set of seed pairs, for which it explicitly computes distances. The core of the algorithm (Block 3) explores pairs overlapping with a seed, looking for all appearance distances smaller than τ.
Algorithm 1 Efficiently computing all distances smaller than τ
Input: windows W^m = {w_i^m}, threshold τ, lookup table o_min, number of initial samples F
Output: set P_τ of all pairs p with d(p) ≤ τ
1. Compute seed pairs P_F
   (a) sample F random pairs p_ij = (w_i^1, w_j^2) from P = W^1 × W^2, giving P_F
   (b) compute d_ij = d(w_i^1, w_j^2), ∀ p_ij ∈ P_F
2. Determine a sequence S of all pairs from P (gives schedule of Block 3 below)
   (a) sort the seed pairs in P_F in order of decreasing distance
   (b) set S(1 : F) = P_F
   (c) fill S((F + 1) : end) with random pairs from P \ P_F
3. For p_c = S(1 : end) (explore the pairs in the S order)
   (a) compute d(p_c)
   (b) if d(p_c) ≤ τ
       i. let r = o_min(τ − d(p_c))
       ii. let N = overlap_neighborhood(p_c, r)
       iii. for all pairs p ∈ N: compute d(p)
       iv. update P_τ ← P_τ ∪ {p ∈ N | d(p) ≤ τ}
   (c) else
       i. let r = o_min(d(p_c) − τ)
       ii. let N = overlap_neighborhood(p_c, r)
       iii. discard all pairs in N from S: S ← S \ N
overlap_neighborhood
Input: pair p_ij = (w_i^1, w_j^2), overlap radius r
Output: overlap neighborhood N of p_ij
N = { (w_i^1, w_v^2) | o(w_j^2, w_v^2) ≥ r } ∪ { (w_u^1, w_j^2) | o(w_i^1, w_u^1) ≥ r }

compute
Input: pair p_ij
Output: If d(w_i^1, w_j^2) was never computed before, then compute it and store it in a table D. If d(w_i^1, w_j^2) is already in D, then directly return it.
When exploring a seed, the algorithm can decide to discard many pairs overlapping with it, as the bound predicts that their distance cannot be lower than τ. This causes the computational saving (step 3.c). Before starting Block 3, Block 2 establishes the sequence in which to explore the seeds, i.e. in order of decreasing distance. The remaining pairs are appended in random order afterwards.
Algorithm core. Block 3 takes one of two actions based on the distance of the pair p_c currently being explored. If d(p_c) ≤ τ, then all pairs in the overlap neighborhood N of p_c have distance smaller than τ. This overlap neighborhood has a radius r = o_min(τ − d(p_c)) predicted by the bound lookup table o_min (fig. 4a). Therefore, Block 3 computes the distance of all pairs in N (step 3.b). Instead, if d(p_c) > τ, Block 3 determines the radius r = o_min(d(p_c) − τ) of the overlap neighborhood containing pairs with distance greater than τ, and then discards all pairs in it (step 3.c).
Overlap neighborhood. The overlap neighborhood of a pair p_ij = (w_i^1, w_j^2) with radius r contains all pairs (w_i^1, w_v^2) such that o(w_j^2, w_v^2) ≥ r, and all pairs (w_u^1, w_j^2) such that o(w_i^1, w_u^1) ≥ r (fig. 4a).
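The following Python sketch mirrors Block 3 and the overlap_neighborhood subroutine. It is our illustration, not the released code; `o` is the spatial-overlap function, `dist` a distance oracle that caches results as in the `compute` subroutine, and the recheck inside the explore branch accounts for the bound holding only with probability α.

def overlap_neighborhood(pair, r, W1, W2, o):
    """Pairs sharing one window with `pair` whose other window overlaps the
    corresponding fixed window by at least r (the subroutine above)."""
    i, j = pair
    N = {(i, v) for v in range(len(W2)) if o(W2[j], W2[v]) >= r}
    N |= {(u, j) for u in range(len(W1)) if o(W1[u], W1[i]) >= r}
    return N

def block3_all_pairs(S, dist, o_min, W1, W2, o, tau):
    """Core loop of Algorithm 1. `S` is the exploration schedule (seeds
    first), `dist(p)` computes-and-caches d(p), `o_min` maps a distance gap
    to an overlap radius (eq. 10)."""
    P_tau, discarded = set(), set()
    for pc in S:
        if pc in discarded:
            continue
        d_pc = dist(pc)
        if d_pc <= tau:
            P_tau.add(pc)
            r = o_min(tau - d_pc)
            # the bound predicts d(p) <= tau here; recheck each candidate
            for p in overlap_neighborhood(pc, r, W1, W2, o):
                if dist(p) <= tau:
                    P_tau.add(p)
        else:
            r = o_min(d_pc - tau)
            # the bound predicts d(p) > tau throughout this neighborhood
            discarded |= overlap_neighborhood(pc, r, W1, W2, o)
    return P_tau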
4 Efficiently computing the single window pair with the smallest distance

We give an algorithm to efficiently find the single pair of windows with the smallest appearance distance between two images. Given as input the two sets of windows W^1, W^2, the algorithm should return the pair p* = (w_{i*}^1, w_{j*}^2) with the smallest distance: d(w_{i*}^1, w_{j*}^2) = min_{i,j} d(w_i^1, w_j^2).
Fig. 4: Overlap neighborhoods. (a) The overlap neighborhood of radius r of a pair (w_i^1, w_j^2) contains all blue pairs. (b) The joint overlap neighborhood of radius s of a pair (w_i^1, w_j^2) contains all blue and green pairs.
Algorithm overview. Algorithm 2 is analogous to Algorithm 1. Block 1 computes distances for the seed pairs and it selects the pair with the smallest distance as initial approximation to p*. Block 3 explores pairs overlapping with a seed, looking for a distance smaller than d(p*). When exploring a seed, the algorithm can decide to discard many pairs overlapping with it, as the bound predicts they cannot be better than p*. Block 2 organizes the seeds in order of increasing distance. In this way, the algorithm can rapidly refine p* towards smaller and smaller values. This is useful because in step 3.c, the amount of discarded pairs is greater as d(p*) gets smaller. Therefore, this seed ordering maximises the number of discarded pairs (i.e. minimizes the number of distances computed).
Algorithm core. Block 3 takes one of two actions based on d(p_c). If d(p_c) ≤ d(p*) + B_α(s), then there might be a better pair than p* within radius s in the joint overlap neighborhood of p_c. Therefore, the algorithm computes the distance of all pairs in this neighborhood (step 3.b). The radius s is an input parameter. Instead, if d(p_c) > d(p*) + B_α(s), the algorithm determines the radius r = o_min(d(p_c) − d(p*)) of the overlap neighborhood that contains only pairs with distance greater than d(p*), and then discards all pairs in it (step 3.c).
Joint overlap neighborhood. The joint overlap neighborhood of a pair p_ij = (w_i^1, w_j^2) with radius s contains all pairs (w_u^1, w_v^2) such that o(w_i^1, w_u^1) ≥ s and o(w_j^2, w_v^2) ≥ s.
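A sketch of the corresponding decision rule in Block 3 of Algorithm 2 (ours, with the same assumed helper interfaces as the previous sketch; `B_alpha_s` is the scalar B_α(s) for the chosen radius s):

def best_pair_step(pc, dist, best, B_alpha_s, s, o_min,
                   joint_nbhd, nbhd, remaining):
    """One exploration step: either search the joint overlap neighborhood of
    radius s for a better pair, or prune a neighborhood that (with
    probability alpha) contains no pair better than the current best."""
    p_star, d_star = best
    d_pc = dist(pc)
    if d_pc <= d_star + B_alpha_s:
        for p in joint_nbhd(pc, s):          # step 3.b: explore
            d_p = dist(p)
            if d_p < d_star:
                p_star, d_star = p, d_p
    else:
        r = o_min(d_pc - d_star)             # step 3.c: prune
        remaining -= nbhd(pc, r)
    return p_star, d_star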
5 Experiments and conclusions
We present experiments on a test set composed of 1000 image pairs from the PASCAL VOC 07
dataset [12], randomly sampled under the constraint that two images in a pair contain at least one
object of the same class (out of 6 classes: aeroplane, bicycle, bus, boat, horse, motorbike). This
setting is relevant for various applications, such as object detection [13, 24], and ensures a balanced
distribution of appearance distances in each image pair (some pairs of windows will have a low
distance while others high distances). We experiment with three appearance descriptors: GIST [19]
(960D), color histograms (CHIST, 4000D), and bag-of-words [11, 25] on the dense SURF descriptor [3] (BOW, 2000D). As appearance distances we use the Euclidean for GIST, and χ² for CHIST and SURF BOW. The bound tables B_α for each descriptor were estimated beforehand from a separate set of 1300 images of other classes (sec. 2.1).
Task 1: all pairs of windows with distance smaller than τ. The task is to find all pairs of windows with distance smaller than a user-defined threshold τ between two images I^1, I^2 (sec. 3). This
task occurs in weakly supervised learning of object classes [7, 11, 16], where algorithms search for
recurring patterns over training images containing thousands of overlapping windows, and in human
pose estimation [22], which compares many overlapping candidate body part locations.
We randomly sample 3000 windows in each image (|W^1| = |W^2| = 3000) and set τ so that 10% of all distances are below it. This makes the task meaningful for any image pair, regardless of the range of distances it contains. For each image pair we quantify performance with two measures: (i) cost: the number of computed distances divided by the total number of window pairs (9 million); (ii) accuracy: Σ_{p ∈ P_τ} (τ − d(p)) / Σ_{p ∈ W^1×W^2 : d(p) ≤ τ} (τ − d(p)), where P_τ is the set of window pairs returned by the algorithm, and the denominator sums over all distances truly below τ. The lowest possible cost while still achieving 100% accuracy is 10%.
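In code, the two measures amount to the following (a sketch with our own argument names):

def task1_cost(n_computed, n_total=3000 * 3000):
    """Fraction of the 9 million pairwise distances actually computed."""
    return n_computed / n_total

def task1_accuracy(returned_distances, all_distances, tau):
    """Mass of (tau - d) recovered by the algorithm over the mass of all
    pairs truly below tau; both arguments are iterables of distances."""
    num = sum(tau - d for d in returned_distances if d <= tau)
    den = sum(tau - d for d in all_distances if d <= tau)
    return num / den if den > 0 else 1.0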
We compare to LSH [2, 6, 10] using [21] as a hash function. It maps descriptors to binary strings, such that the Hamming distance between two strings is related to the value of a Gaussian kernel between the original descriptors [21]. As recommended in [6, 10], we generate T separate (random) encodings and build T hash tables, each with 2^C bins, where C is the number of bits in the encoding.
Algorithm 2 Efficiently computing the smallest distance
Input: windows W^m = {w_i^m}, lookup table o_min, search radius s, number of initial samples F
Output: pair p* with the smallest distance
1. Compute seed pairs P_F (as Block 1 of Algorithm 1) and estimate current best pair: p* = arg min_{p_ij ∈ P_F} d_ij
2. Determine a sequence S of all pairs (as Block 2 of Algorithm 1)
3. For p_c = S(1 : end) (explore the pairs in the S order)
   (a) compute d(p_c)
   (b) if d(p_c) ≤ d(p*) + B_α(s)
       i. let N = joint_overlap_neighborhood(p_c, s)
       ii. for all pairs p ∈ N: compute d(p)
       iii. update p* ← arg min {{d(p*)} ∪ {d(p) | p ∈ N}}
   (c) else
       i. let r = o_min(d(p_c) − d(p*))
       ii. let N = overlap_neighborhood(p_c, r)
       iii. discard all pairs in N from S: S ← S \ N
joint_overlap_neighborhood
Input: pair p_ij = (w_i^1, w_j^2), overlap radius s
Output: joint overlap neighborhood N of p_ij
N = { (w_u^1, w_v^2) | o(w_i^1, w_u^1) ≥ s, o(w_j^2, w_v^2) ≥ s }
To perform Task 1, we loop over each table t and do: (H1) hash all w_j^2 ∈ W^2 into table t; (H2) for each w_i^1 ∈ W^1 do: (H2.1) hash w_i^1 into its bin b_{t,i}^1; (H2.2) compute all distances d in the original space between w_i^1 and all windows w_j^2 ∈ b_{t,i}^1 (unless already computed when inspecting a previous table); (H3) return all computed d(w_i^1, w_j^2) ≤ τ.
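For reference, steps (H1)–(H3) can be sketched as follows (our code; `hash_fn(t, w)`, giving the bin key of descriptor w in table t, and `dist`, the original-space distance, are assumed interfaces):

def lsh_all_pairs_below_tau(W1, W2, n_tables, hash_fn, dist, tau):
    """Probe T hash tables; compute each colliding pair's distance at most
    once (H2.2), then keep all computed distances below tau (H3)."""
    computed = {}
    for t in range(n_tables):
        table = {}
        for j, w2 in enumerate(W2):                  # (H1) hash W2 into table t
            table.setdefault(hash_fn(t, w2), []).append(j)
        for i, w1 in enumerate(W1):                  # (H2) probe with each w1
            for j in table.get(hash_fn(t, w1), []):  # (H2.1) same-bin windows
                if (i, j) not in computed:           # (H2.2) at most once
                    computed[(i, j)] = dist(w1, W2[j])
    return {p: d for p, d in computed.items() if d <= tau}  # (H3)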
We also compare to approximate nearest-neighbors based on kd-trees, using the ANN library [18]. To perform Task 1, we do: (A1) for each w_i^1 ∈ W^1 do: (A1.1) compute the τ-NN between w_i^1 and all windows w_j^2 ∈ W^2 and return them all. The notion of cost above is not defined for ANN methods based on trees; instead, we measure wall-clock runtime and report as cost the ratio of the runtime of approximate NN over the runtime of exact NN (also computed using the ANN library [18]). This gives a meaningful indication of speedup, which can be compared to the cost we report for our method and LSH. As the ANN library supports only the Euclidean distance, we report results only for GIST.
The results table reports cost and accuracy averaged over the test set. Our method from sec. 3 performs very well for all three descriptors. On average it achieves 98% accuracy at 16% cost. This is a considerable speedup over exhaustive search, as it means only 7% of the 90% of distances greater than τ have been computed. The behavior of LSH depends on T and C. The higher the T, the higher the accuracy, but also the cost (because there are more collisions; the same holds for lower C). To compare fairly, we evaluate LSH over T ∈ {1, 20} and C ∈ {2, 30} and report results for the T, C that deliver the closest accuracy to our method. As the table shows, on average over the three descriptors, for the same accuracy LSH has cost 92%, substantially worse than our method. The behavior of ANN depends on the degree of approximation, which we set so as to get accuracy closest to our method. At 92% accuracy, ANN has 72% of the runtime of exact NN. This shows that, if high accuracy is desired, ANN offers only a modest speedup (compared to our 18% cost for GIST).
Task 2: all windows closer than τ to a query. This is a special case of Task 1, where W^1 contains just one window. Hence, this becomes a τ-nearest-neighbours task where W^1 acts as a query and W^2 as the retrieval database. This task occurs in many applications, e.g. object detectors based on kernel SVMs compare a support vector (query) to a large set of overlapping windows in the test image [13, 24]. As this is expensive, many detectors resort to linear kernels [9]. Our algorithms offer the option to use more complex kernels while retaining a practical speed. Other applications include tracking in video [4, 5] and image retrieval [20] (see beginning of sec. 1).
Results for Tasks 1-3 (averaged over the test set):

Task 1 (cost / accuracy):
        GIST + Euclidean      CHIST + χ²            SURF BOW + χ²
our     18.0% / 97.3%         15.7% / 97.7%         15.2% / 98.5%
LSH     86.2% / 95.4%         93.7% / 97.2%         96.8% / 98.5%
ANN     71.8% / 91.9%         -                     -

Task 2 (cost / accuracy):
our     30.2% / 87.1%         30.3% / 96.2%         28.6% / 94.0%
LSH     73.4% / 83.5%         96.9% / 95.1%         88.7% / 92.1%
ANN     72.6% / 87.7%         -                     -

Task 3 (cost / distance ratio / rank):
our     2.3% / 1.02 / 1.39    0.4% / 1.01 / 1.12    0.7% / 1.01 / 1.19
LSH     16.4% / 1.03 / 2.72   37.5% / 1.02 / 33.5   46.5% / 1.01 / 9.62
ANN     58.6% / 1.01 / 1.48   -                     -
As the table shows, our method is somewhat less efficient than on Task 1. This makes sense, as it
can only exploit overlap structure in one of the two input sets. Yet, for a similar accuracy it offers
greater speedup than LSH and ANN.
Task 3: single pair of windows with smallest distance. The task is to find the single pair of windows with the smallest distance between I^1 and I^2, out of 3000 windows in each image (sec. 4), and has similar applications to Task 1.
We quantify performance with three measures: (i) cost: as in all other tasks. (ii) distance ratio: the
ratio between the smallest distance returned by the algorithm and the true smallest distance. The
best possible value is 1, and higher values are worse; (iii) rank: the rank of the returned distance
among all 9 million.
To perform Task 3 with LSH, we simply modify step (H3) of the procedure given for Task 1 to:
return the smallest distance among all those computed. To perform Task 3 with ANN we replace
step (A1.1) with: compute the NN of w_i^1 in W^2. At the end of loop (A1) return the smallest distance
among all those computed.
As the table shows, on average over the three descriptors, our method from sec. 4 achieves a distance
ratio of 1.01 at 1.1% cost, which is almost 100× faster than exhaustive search. The average rank of the returned distance is 1.25 out of 9 million, which is almost a perfect result. When compared at a
similar distance ratio, our method is considerably more efficient than LSH and ANN. LSH computes
33.3% of all distances, while ANN brings only a speedup of factor 2 over exact NN.
Runtime considerations. While we have measured only the number of computed appearance distances, our algorithms also compute spatial overlaps. Crucially, spatial overlaps are computed in the
4D geometric space, compared to 1000+ dimensions for the appearance space. Therefore, computing spatial overlaps has negligible impact on the total runtime of the algorithms. In practice, when
using 5000 windows per image with 4000D dense SURF BOW descriptors, the total runtime of our
algorithms is 71s for Task 1 or 16s for Task 3, compared to 335s for exhaustive search. Importantly, the cost of computing the descriptors is small compared to the cost of evaluating distances,
as it is roughly linear in the number of windows and can be implemented very rapidly. In practice,
computing dense SURF BOW for 5000 windows in two images takes 5 seconds.
Conclusions. We have proposed efficient algorithms for computing distances of appearance descriptors between two sets of image windows, by taking advantage of the overlap structure in the
sets. Our experiments demonstrate that these algorithms greatly reduce the number of appearance
distances computed when solving several tasks relevant to computer vision and outperform LSH
and ANN for these tasks. Our algorithms could be useful in various applications. For example,
improving the spatial accuracy of weakly supervised learners [7, 11] by using thousands of windows per image, using more complex kernels and detecting more classes in kernel SVM object
detectors [13, 24], and enabling image retrieval systems to search at the window level with any descriptor, rather than returning entire images or being constrained to bag-of-words descriptors [20]. To encourage these applications, we release our source code at http://www.vision.ee.ethz.ch/~calvin.
References
[1] B. Alexe, V. Petrescu, and V. Ferrari. Exploiting spatial overlap to efficiently compute appearance distances between image windows - supplementary material. In NIPS, 2011. Also
available at http://www.vision.ee.ethz.ch/ calvin/publications.html.
[2] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in
high dimensions. In Communications of the ACM, 2008.
[3] H. Bay, A. Ess, T. Tuytelaars, and L. van Gool. SURF: Speeded up robust features. CVIU, 110(3):346–359, 2008.
[4] C. Bibby and I. Reid. Robust real-time visual tracking using pixel-wise posteriors. In ECCV,
2008.
[5] S. Birchfield. Elliptical head tracking using intensity gradients and color histograms. In CVPR,
1998.
[6] O. Chum, J. Philbin, M. Isard, and A. Zisserman. Scalable near identical image and shot
detection. In CIVR, 2007.
[7] O. Chum and A. Zisserman. An exemplar model for learning object classes. In CVPR, 2007.
[8] N. Dalal and B. Triggs. Histogram of Oriented Gradients for Human Detection. In CVPR, volume 2, pages 886–893, 2005.
[9] N. Dalal and B. Triggs. Histogram of oriented gradients for human detection. In CVPR, 2005.
[10] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality-sensitive hashing scheme based
on p-stable distributions. In SCG, 2004.
[11] T. Deselaers, B. Alexe, and V. Ferrari. Localizing objects while learning their appearance. In
ECCV, 2010.
[12] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The PASCAL Visual
Object Classes Challenge 2007 Results, 2007.
[13] H. Harzallah, F. Jurie, and C. Schmid. Combining efficient object localization and image
classification. In ICCV, 2009.
[14] H. Jegou, M. Douze, and C. Schmid. Hamming embedding and weak geometric consistency
for large-scale image search. In ECCV, 2008.
[15] Y. Ke and R. Sukthankar. PCA-SIFT: A more distinctive representation for local image descriptors. In CVPR, 2004.
[16] G. Kim and A. Torralba. Unsupervised detection of regions of interest using iterative link
analysis. In NIPS, 2009.
[17] N. Kumar, L. Zhang, and S. Nayar. What is a good nearest neighbors algorithm for finding
similar patches in images? In ECCV, 2008.
[18] D. M. Mount and S. Arya. Ann: A library for approximate nearest neighbor searching, August
2006.
[19] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 42(3):145–175, 2001.
[20] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In CVPR, 2007.
[21] M. Raginsky and S. Lazebnik. Locality sensitive binary codes from shift-invariant kernels. In
NIPS, 2009.
[22] B. Sapp, A. Toshev, and B. Taskar. Cascaded models for articulated pose estimation. In ECCV,
2010.
[23] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition.
In CVPR, 2008.
[24] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection.
In ICCV, 2009.
[25] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: a comprehensive study. IJCV, 2007.
Accelerated Adaptive Markov Chain
for Partition Function Computation*
Stefano Ermon, Carla P. Gomes
Dept. of Computer Science
Cornell University
Ithaca NY 14853, U.S.A.
Ashish Sabharwal
IBM Watson Research Ctr.
Yorktown Heights
NY 10598, U.S.A.
Bart Selman
Dept. of Computer Science
Cornell University
Ithaca NY 14853, U.S.A.
Abstract
We propose a novel Adaptive Markov Chain Monte Carlo algorithm to compute
the partition function. In particular, we show how to accelerate a flat histogram
sampling technique by significantly reducing the number of "null moves" in the
chain, while maintaining asymptotic convergence properties. Our experiments
show that our method converges quickly to highly accurate solutions on a range of
benchmark instances, outperforming other state-of-the-art methods such as IJGP,
TRW, and Gibbs sampling both in run-time and accuracy. We also show how obtaining a so-called density of states distribution allows for efficient weight learning
in Markov Logic theories.
1 Introduction
We propose a novel and general method to approximate the partition function of intricate probability
distributions defined over combinatorial spaces. Computing the partition function is a notoriously
hard computational problem. Only a few tractable cases are known. In particular, if the corresponding
graphical model has low treewidth, then the problem can be solved exactly using methods based on
tree decompositions, such as the junction tree algorithm [1]. The partition function for planar graphs
with binary variables and no external field can also be computed in polynomial time [2].
We will consider an adaptive MCMC sampling strategy, inspired by the Wang-Landau method [3],
which is a so-called flat histogram sampling strategy from statistical physics. Given a combinatorial
space and an energy function (for instance, describing the negative log-likelihood of each configuration), a flat histogram method is a sampling strategy based on a Markov Chain that converges to a
steady state where it spends approximately the same amount of time in states with a low density of
configurations (which are usually low energy states) as in states with a high density.
We propose two key improvements to the Wang-Landau method, namely energy saturation
and a focused-random walk component, leading to a new and more efficient algorithm called
FocusedFlatSAT. Energy saturation allows the chain to visit fewer energy levels, and the random walk style moves reduce the number of "null moves" in the Markov chain. Both improvements
maintain the same global stationary distribution, while allowing us to go well beyond the domain of
spin glasses where the Wang-Landau method has been traditionally applied.
We demonstrate the effectiveness of our approach by a comparison with state-of-the-art methods to
approximate the partition function or bound it, such as Tree Reweighted Belief Propagation [4], IJGP-SampleSearch [5], and Gibbs sampling [6]. Our experiments show that our approach outperforms
these approaches in a variety of problem domains, both in terms of accuracy and run-time.
The density of states serves as a rich description of the underlying probabilistic model. Once computed, it can be used to efficiently evaluate the partition function for all parameter settings without
* Supported by NSF Expeditions in Computing award for Computational Sustainability (grant 0832782).
the need for further inference steps, a stark contrast with competing methods for partition function computation. For instance, in statistical physics applications, we can use it to evaluate the partition function Z(T) for all values of the temperature T. This level of abstraction can be a fundamental
advantage for machine learning methods: in fact, in a learning problem we can parameterize Z(·)
according to the model parameters that we want to learn from the training data. For example, in
the case of a Markov Logic theory [7, 8] with weights w1 , . . . , wK of its K first order formulas,
we can parameterize the partition function as Z(w1 , . . . , wK ). Upon defining an appropriate energy
function and obtaining the corresponding density of states, we can then use efficient evaluations of
the partition function to search for model parameters that best fit the training data, thus obtaining a
promising new approach to learning in Markov Logic Networks and graphical models.
2 Probabilistic model and the partition function
We focus on intricate probability distributions defined over a set of configurations, i.e., assignments
to a set of N discrete variables {x1 , . . . , xN }, assumed here to be Boolean for simplicity. The
probability distribution is specified through a set of combinatorial features or constraints over these
variables. Such constraints can be either hard or soft, with the i-th soft constraint Ci being associated
with a weight w_i. Let ψ_i(x) = 1 if a configuration x violates C_i, and 0 otherwise. The probability P_w(x) of x is defined as 0 if x violates any hard constraint, and as

P_w(x) = (1/Z(w)) · exp( − Σ_{C_i ∈ C_soft} w_i ψ_i(x) )   (1)
otherwise, where C_soft is the set of soft constraints. The partition function, Z(w), is simply the normalization constant for this probability distribution, and is given by:

Z(w) = Σ_{x ∈ X_hard} exp( − Σ_{C_i ∈ C_soft} w_i ψ_i(x) )   (2)
where X_hard ⊆ {0, 1}^N is the set of configurations satisfying all hard constraints. Note that as w_i → ∞, the soft constraint C_i effectively becomes a hard constraint. This factored representation is closely related to a graphical model where we use weighted Boolean formulas to specify clique potentials. This is a natural framework for combining purely logical and probabilistic inference, used for example to define grounded Markov Logic Networks [8, 9].
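As a grounding example, eq. (2) can be evaluated directly for tiny N (an exponential reference implementation, ours; `violated_i` plays the role of ψ_i):

import math
from itertools import product

def partition_function_bruteforce(n, hard, soft):
    """Direct evaluation of eq. (2). `hard` is a list of predicates on the
    assignment x; `soft` is a list of (w_i, violated_i) pairs, where
    violated_i(x) returns 1 if soft constraint C_i is violated, else 0."""
    Z = 0.0
    for x in product((0, 1), repeat=n):              # enumerate {0,1}^N
        if all(h(x) for h in hard):                  # x in X_hard
            Z += math.exp(-sum(w * v(x) for w, v in soft))
    return Z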
The partition function is a very important quantity but computing it is a well-known computational
challenge, which we propose to address by employing the "density of states" method to be discussed
shortly. We will compare our approach against several state-of-the-art methods available for computing the partition function or obtaining bounds on it. Wainwright et al. [4], for example, proposed
a variational method known as tree re-weighting (TRW) to obtain bounds on the partition function
of graphical models. Unlike standard Belief Propagation schemes which are based on Bethe free energies [10], the TRW approach uses a tree-reweighted (TRW) free energy which consists of a linear
combination of free energies defined on spanning trees of the model. Using convexity arguments it
is then possible to obtain upper bounds on various quantities, such as the partition function.
Based on iterated join-graph propagation, IJGP-SampleSearch [5] is a popular solver for the probability of evidence problem (i.e., partition function computation with a subset of "evidence" variables
fixed) for general graphical models. This method is based on an importance sampling scheme which
is augmented with systematic constraint-based backtracking search. An alternative approach is to
use Gibbs sampling to estimate the partition function by estimating, using sample average, a sequence of multipliers that correspond to the ratios of the partition function evaluated at different
weight levels [6]. Lastly, the partition function for planar graphs where all variables are binary and
have only pairwise interactions (i.e., the zero external field case) can be calculated exactly in polynomial time [2]. Although we are interested in algorithms for the general (intractable) case, we used
the software associated with this approach to obtain the ground truth for planar graphs and evaluate
the accuracy of the estimates obtained by other methods.
3 Density of states
Our approach for computing the partition function is based on solving the density of states problem. Given a combinatorial space such as the one defined earlier and an energy function E : {0, 1}^N → R, the density of states (DOS) n is a function n : range(E) → N that maps energy levels to the number of configurations with that energy, i.e., n(k) = |{σ ∈ {0, 1}^N | E(σ) = k}|. In our context, we are interested in computing the number of configurations that satisfy certain properties that are specified using an appropriate energy function. For instance, we might define the energy E(σ) of a configuration σ to be the number of hard constraints that are violated by σ. Or we may use the sum of the weights of the violated soft constraints.
Once we are able to compute the full density of states, i.e., the number of configurations at each possible energy level, it is straightforward to evaluate the partition function Z(w) for any weight vector w, by summing up terms of the form n(i) exp(−E(i)), where E(i) denotes the energy of every configuration in state i. This is the method we use in this work for estimating the partition function. More complex energy functions may be defined for other related tasks, such as weight learning, i.e., given some training data x ∈ X = {0, 1}^N, computing arg max_w P_w(x) where P_w(x) is given by Equation (1). Here we can define the energy E(σ) to be w · ℓ, where ℓ = (ℓ_1, . . . , ℓ_M) gives the number of constraints of weight w_i violated by σ. Our focus in the rest of the paper will thus be on computing the density of states efficiently.
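Concretely, once the density of states is in hand, evaluating the partition function for any weight setting is a single weighted sum. A sketch (ours; the log-sum-exp guards against overflow): re-evaluating Z for new weights only requires recomputing the energies of the levels, not re-running inference.

import numpy as np

def log_partition_from_dos(log_n, energies):
    """log Z = logsumexp_k (log n(k) - E(k)), given the log-density of
    states `log_n` and the energy `energies[k]` of each level k."""
    logs = np.asarray(log_n) - np.asarray(energies)
    m = logs.max()
    return m + np.log(np.exp(logs - m).sum())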
3.1 The MCMCFlatSAT algorithm
MCMCFlatSAT [11] is an Adaptive Markov Chain Monte Carlo (adaptive MCMC) method for computing the density of states for combinatorial problems, inspired by the Wang-Landau algorithm [3] from statistical physics. Interestingly, this algorithm does not make any assumption about the form or semantics of the energy. At least in principle, the only thing it needs is a partitioning of the state space, where the "energy" just provides an index over the subsets that compose the partition. The algorithm is based on the flat histogram idea and works by trying to construct a reversible Markov Chain on the space {0, 1}^N of all configurations such that the steady state probability of a configuration σ is inversely proportional to the density of states n(E(σ)). In this way, the stationary distribution is such that all the energy levels are visited equally often (i.e., when we count the visits to each energy level, we see a flat visit histogram). Specifically, we define a Markov Chain with the following transition probability:
$$p_{\sigma \to \sigma'} = \begin{cases} \frac{1}{N}\,\min\left\{1,\ \frac{n(E(\sigma))}{n(E(\sigma'))}\right\} & \text{if } d_H(\sigma, \sigma') = 1 \\[4pt] 0 & \text{if } d_H(\sigma, \sigma') > 1 \end{cases} \qquad (3)$$
where d_H(σ, σ′) is the Hamming distance between σ and σ′. The probability of a self-loop, p_{σ→σ}, is given by the normalization constraint p_{σ→σ} + Σ_{σ′ : d_H(σ,σ′)=1} p_{σ→σ′} = 1. The detailed balance equation P(σ) p_{σ→σ′} = P(σ′) p_{σ′→σ} is satisfied by P(σ) ∝ 1/n(E(σ)). This means¹ that the Markov Chain will reach a stationary probability distribution P (regardless of the initial state) such that the probability of a configuration σ with energy E = E(σ) is inversely proportional to the number of configurations with energy E. This leads to an asymptotically flat histogram of the energies of the states visited, because P(E) = Σ_{σ : E(σ)=E} P(σ) ∝ n(E) · (1/n(E)) = 1 (i.e., independent of E).
Since the density of states is not known a priori, and computing it is precisely the goal of the algorithm, it is not possible to construct directly a random walk with transition probability (3). However
it is possible to start with an initial guess g(·) for n(·) and keep updating this estimate g(·) in a
systematic way to produce a flat energy histogram and simultaneously make the estimate g(E) converge to the true value n(E) for every energy level E. The estimate is adjusted using a modification
factor F which controls the trade-off between the convergence rate of the algorithm and its accuracy
(large initial values of F lead to fast convergence to a rather inaccurate solution). For completeness,
we provide the pseudo-code as Algorithm 1; see [11] for details.
¹ The chain is finite, irreducible, and aperiodic, therefore ergodic.
Algorithm 1 MCMCFlatSAT algorithm to compute the density of states
1: Start with a guess g(E) = 1 for all E = 1, . . . , m
2: Initialize H(E) = 0 for all E = 1, . . . , m
3: Start with a modification factor F = F0 = 1.5
4: repeat
5:   Randomly pick a configuration σ
6:   repeat
7:     Generate a new configuration σ′ (by flipping a variable)
8:     Let E = E(σ) and E′ = E(σ′) (saturated energies)
9:     Set σ = σ′ with probability min{1, g(E)/g(E′)} (move acceptance/rejection step)
10:    Let Ec = E(σ) be the current energy level
11:    Adjust the density g(Ec) = g(Ec) · F
12:    Update visit histogram H(Ec) = H(Ec) + 1
13:  until H is flat (all the values are at least 90% of the maximum value)
14:  Reduce F: F ← √F
15:  Reset the visit histogram H
16: until F is close enough to 1
17: Normalize g so that Σ_E g(E) = 2^N
18: return g as estimate of n
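A minimal Python sketch of Algorithm 1, assuming a user-supplied energy function over 0/1 tuples whose levels are all reachable; it works in log space (log g) so that multiplying g by F becomes adding log F, and the final F and flatness test are coarser than a production implementation would use:

```python
import math
import random

def mcmc_flat_sat(energy, num_vars, max_energy, f0=1.5, f_final=1.001,
                  flatness=0.9):
    """Flat-histogram (Wang-Landau style) estimate of the density of states,
    following Algorithm 1. `energy` maps a tuple of 0/1 values to an integer
    level in {0, ..., max_energy}; all levels are assumed reachable."""
    levels = max_energy + 1
    log_g = [0.0] * levels                      # guess g(E) = 1 for all E
    log_f = math.log(f0)
    sigma = tuple(random.randint(0, 1) for _ in range(num_vars))
    e = energy(sigma)
    while log_f > math.log(f_final):
        hist = [0] * levels
        steps = 0
        while True:
            i = random.randrange(num_vars)      # flip one variable
            new = sigma[:i] + (1 - sigma[i],) + sigma[i + 1:]
            e_new = energy(new)
            # Accept with probability min{1, g(E)/g(E')}
            if (log_g[e] >= log_g[e_new]
                    or random.random() < math.exp(log_g[e] - log_g[e_new])):
                sigma, e = new, e_new
            log_g[e] += log_f                   # g(Ec) <- g(Ec) * F
            hist[e] += 1
            steps += 1
            if steps % 1000 == 0 and min(hist) >= flatness * max(hist):
                break                           # histogram is flat
        log_f /= 2.0                            # F <- sqrt(F) in log space
    # Normalize so that sum_E g(E) = 2^N (log-sum-exp for stability).
    top = max(log_g)
    log_sum = top + math.log(sum(math.exp(v - top) for v in log_g))
    return [math.exp(v + num_vars * math.log(2) - log_sum) for v in log_g]

# Toy check: E(sigma) = number of ones, so n(E) should approach C(8, E).
random.seed(0)
print([round(g) for g in mcmc_flat_sat(lambda s: sum(s), 8, 8)])
```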
4 FocusedFlatSAT: Efficient computation of density of states
We propose two crucial improvements to MCMCFlatSAT, namely energy saturation and
the introduction of a focused-random walk component, leading to a new algorithm called
FocusedFlatSAT. As we will see in Table 1, FocusedFlatSAT provides the same accuracy as
MCMCFlatSAT but is about 10 times faster on that benchmark. Moreover, our results for the Ising
model (described below) in Figure 2 demonstrate that FocusedFlatSAT scales much better.
Energy saturation. The time needed for each iteration of MCMCFlatSAT to converge is significantly affected by the number of different non-empty energy levels (buckets). In many cases, the weights defining the probability distribution P_w(x) are all positive (i.e., there is an incentive to satisfy the constraints), and as an effect of the exponential discounting in Equation (1), configurations that violate a large number of constraints have a negligible contribution to the sum defining the partition function Z. We therefore define a new saturated energy function E′(σ) = min{E(σ), K}, where K is a user-defined parameter. For the positive weights case, the partition function Z′ associated with the saturated energy function is a guaranteed upper bound on the original Z, for any K. When all constraints are hard, Z′ = Z for any value K ≥ 1 because only the first energy bucket matters. In general, when soft constraints are present, the bound gets tighter as K increases, and we can obtain theoretical worst-case error bounds when K is chosen to be a percentile of the energy distribution (e.g., saturation at the median energy yields a 2x bound). In our experiments, we set K to be the average number of constraints violated by a random configuration, and we found that the error introduced by the saturation is negligible compared to other inherent approximations in density of states estimation. Intuitively, this is because the states where the probability is concentrated turn out to typically have a much lower energy than K, and thus an exponentially larger contribution to Z. Furthermore, energy saturation preserves the connectivity of the chain.
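A one-line sketch of this saturation wrapper (the helper name and the toy energy are assumptions, not the authors' code):

```python
def saturated_energy(energy, k):
    """Wrap an energy function with the saturation E'(s) = min(E(s), K).
    For positive weights the saturated partition function Z' upper-bounds Z."""
    return lambda sigma: min(energy(sigma), k)

# Example: saturate a violated-constraint count at K = 10.
e_sat = saturated_energy(lambda s: sum(s), k=10)
```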
Focused Random Walk. Both in the original Wang-Landau method and in MCMCFlatSAT, new configurations are generated by flipping a variable selected uniformly at random [3, 11]. Let us call this configuration selection distribution the proposal distribution, and let T_{σ→σ′} denote the probability of generating a σ′ from this distribution while in configuration σ. In the Wang-Landau algorithm, proposed configurations are then rejected with a probability that depends on the density of states of the respective energy levels. Move rejections obviously lengthen the mixing time of the underlying Markov Chain. We introduce here a novel proposal distribution that significantly reduces the number of move rejections, resulting in much faster convergence rates. It is inspired by local search SAT solvers [12] and is especially critical for the class of highly combinatorial energy functions we consider in this work. We note that if the acceptance probability is taken to be
$$\min\left\{1,\ \frac{n(E(\sigma))}{n(E(\sigma'))} \cdot \frac{T_{\sigma' \to \sigma}}{T_{\sigma \to \sigma'}}\right\}$$
[Figure 1 plot data omitted; axes: energy level (x) vs. number of moves (y); series: accepted/rejected moves to higher, equal, and lower energy levels.]
Figure 1: Histograms depicting the number of proposed moves accepted and rejected. Left: MCMCFlatSAT. Right: FocusedFlatSAT. See PDF for color version.
the properties of the steady state distribution are preserved as long as the proposal distribution is
such that the ergodicity property is maintained.
In order to understand the motivation behind the new proposal distribution, consider the move acceptance/rejection histogram shown in the left panel of Figure 1. For the instance under consideration, MCMCFlatSAT converged to a flat histogram after having visited each of the 385 energy levels (on the x-axis) roughly 2.6M times. Each colored region shows the cumulative number of moves (on the y-axis) accepted or rejected from each energy level (on the x-axis) to another configuration with a higher, equal, or lower energy level, resp. This gives six possible move types, and the histogram shows how often each is taken at any energy level. Most importantly, notice that at low energy levels, a vast majority of the moves were proposed to a higher energy level and were rejected by the algorithm (shown as the dominating purple region). This is an indirect consequence of the fact that in such instances, in the low energy regime, the density of states increases drastically as the energy level increases, i.e., g(E′) ≫ g(E) when E′ > E. As a result, most of the proposed moves are to higher energy levels and are in turn rejected by the algorithm in the move acceptance/rejection step discussed above.
In order to address this issue, we propose to modify the proposal distribution in a way that increases the chance of proposing moves to the same or lower energy levels, despite the fact that there are relatively few such moves. Inspired by local search SAT solvers, we enhance MCMCFlatSAT with a focused random walk component that gives preference to selecting variables to flip from violated constraints (if any), thereby introducing an indirect bias towards lower energy states. Specifically, if the given configuration σ is a satisfying assignment, pick a variable uniformly at random to be flipped (thus T_{σ→σ′} = 1/N when the Hamming distance d_H(σ, σ′) = 1, zero otherwise). If σ is not a solution, then with probability p a variable to be flipped is chosen uniformly at random from a randomly chosen violated constraint, and with probability 1 − p a variable is chosen uniformly at random. With this approach, when σ is not a solution and σ and σ′ differ only on the i-th variable,
$$T_{\sigma \to \sigma'} = (1-p)\,\frac{1}{N} \;+\; p\,\frac{\sum_{c \in C :\, i \in c} \chi_c(\sigma) \cdot 1/|c|}{\sum_{c \in C} \chi_c(\sigma)},$$
where χ_c(σ) = 1 iff σ violates constraint c, and |c| denotes the number of variables in constraint c. With this proposal distribution we ensure that for all 1 > p ≥ 0, whenever T_{σ→σ′} > 0, we also have T_{σ′→σ} > 0. Moreover, the connectivity of the Markov Chain is preserved (since we don't remove any edge from the original Markov Chain). We therefore have the following result:
Proposition 1. For all p ∈ [0, 1), the Markov Chain with proposal distribution T_{σ→σ′} defined above is irreducible and aperiodic. Therefore it has a unique stationary distribution, proportional to 1/n(E(σ)).
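A sketch of sampling from this proposal distribution, assuming each constraint is represented as a pair of (variable indices, satisfaction test); this representation is an assumption for illustration, not the authors' data structure:

```python
import random

def propose_flip(sigma, clauses, p=0.5):
    """Focused proposal: with probability p, flip a variable from a
    uniformly chosen violated clause; otherwise (or if sigma satisfies
    everything) flip a uniformly chosen variable."""
    violated = [idx for idx, check in clauses if not check(sigma)]
    if violated and random.random() < p:
        clause = random.choice(violated)   # variable indices of a violated clause
        i = random.choice(clause)
    else:
        i = random.randrange(len(sigma))
    return sigma[:i] + (1 - sigma[i],) + sigma[i + 1:]

# Example with two simple constraints over 4 Boolean variables:
clauses = [((0, 1), lambda s: s[0] or s[1]),
           ((2, 3), lambda s: s[2] != s[3])]
print(propose_flip((0, 0, 1, 1), clauses, p=0.5))
```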
The right panel of Figure 1 shows the move acceptance/rejection histogram when FocusedFlatSAT
is used, i.e., with the above proposal distribution. The same instance now needs under 1.2M visits
per energy level for the method to converge. Moreover, the number of rejected moves (shown in
purple and green) in low energy states is significantly fewer than the dominating purple region in the
left panel. This allows the Markov Chain to move more freely in the space and to converge faster.
Figure 2 shows a runtime comparison of FocusedFlatSAT against MCMCFlatSAT on n × n Ising
models (details to be discussed in Section 5). As we see, incorporating energy saturation reduces the
time to convergence (while achieving the same level of accuracy), and using focused random walk
moves further decreases the convergence time, especially as n increases.
[Figure 2 plot data omitted; axes: grid size n (x) vs. time in seconds (y); series: MCMCFlatSAT, MCMCFlatSAT+Saturation, FocusedFlatSAT.]
Figure 2: Runtime comparison on ferromagnetic Ising models on square lattices of size n × n.
Table 1: Comparison with model counters; only hard constraints. Runtime is in seconds.

| Instance | n | m | Exact # Models | FocusedFlatSat Models | Time | MCMC-FlatSat Models | Time | SampleCount Models | Time | SampleMiniSAT Models | Time |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2bitmax_6 | 252 | 766 | 2.10 × 10^29 | 1.91 × 10^29 | 156 | 1.96 × 10^29 | 1863 | ≥ 2.40 × 10^28 | 29 | 2.08 × 10^29 | 345 |
| wff-3-3.5 | 150 | 525 | 1.40 × 10^14 | 1.43 × 10^14 | 20 | 1.34 × 10^14 | 393 | ≥ 1.60 × 10^13 | 145 | 1.60 × 10^13 | 240 |
| wff-3.1.5 | 100 | 150 | 1.80 × 10^21 | 1.86 × 10^21 | 1 | 1.83 × 10^21 | 21 | ≥ 1.00 × 10^20 | 240 | 1.58 × 10^21 | 128 |
| wff-4-5.0 | 100 | 500 | — | 9.31 × 10^16 | 5 | 8.64 × 10^16 | 189 | ≥ 8.00 × 10^15 | 120 | 1.09 × 10^17 | 191 |
| ls8-norm | 301 | 1603 | 5.40 × 10^11 | 5.78 × 10^11 | 231 | 5.93 × 10^11 | 2693 | ≥ 3.10 × 10^10 | 1140 | 2.22 × 10^11 | 168 |

5 Experimental evaluation
We compare FocusedFlatSAT against several state-of-the-art methods for computing an estimate
of or bound on the partition function.2 An evaluation such as this is inherently challenging as the
ground truth is very hard to obtain and computational bounds can be orders of magnitude off from
the truth, making a comparison of estimates not very meaningful. We therefore propose to evaluate
the methods on either small instances whose ground truth can be evaluated by "brute force," or larger
instances whose ground truth (or bounds on it) can be computed analytically or through other tools
such as efficient model counters. We also consider planar cases for which a specialized polynomial
time exact algorithm is available. Efficient methods for handling instances of small treewidth are
also well known; here we push the boundaries to instances of relatively higher treewidth.
For partition function evaluation, we compare against the tree re-weighting (TRW) variational
method for upper bounds, the iterated join-graph propagation (IJGP), and Gibbs sampling; see Section 2 for a very brief discussion of these approaches. For weight learning, we compare against
the Alchemy system. Unless otherwise specified, the energy function used is always the number of
violated constraints, and we use a 50% ratio of random moves (p = 0.5). The algorithm is run for
20 iterations, with an initial modification factor F0 = 1.5. The experiments were conducted on a
16-core 2.4 GHz Intel Xeon machine with 32 GB memory, running RedHat Linux.
Hard constraints. First, consider models with only hard constraints, which define a uniform measure on the set of satisfying assignments. In this case, the problem of computing the partition function is equivalent to standard model counting. We compare the performance of FocusedFlatSAT
with MCMC-FlatSat and with two state-of-the-art approximate model counters: SampleCount
[13] and SampleMiniSATExact [14]. The instances used are taken from earlier work [11]. The results in Table 1 show that FocusedFlatSAT almost always obtains much more accurate solution
counts, and is often significantly faster (about an order of magnitude faster than MCMC-FlatSat).
Soft Constraints. We consider Ising models defined on an n × n square lattice, where P(σ) ∝ exp(−E(σ)) with E(σ) = Σ_{(i,j)} w_{ij} I[σ_i ≠ σ_j]. Here I is the indicator function. This imposes a penalty w_{ij} if spins σ_i and σ_j are not aligned. We consider a ferromagnetic case where w_{ij} = w > 0 for all edges, and a frustrated case with a mixture of positive and negative interactions.
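A minimal sketch of this energy function on a flattened lattice (the edge list and weight map are assumed representations):

```python
def ising_energy(sigma, edges, w):
    """E(sigma) = sum over edges (i, j) of w[(i, j)] * I[sigma_i != sigma_j],
    matching the pairwise Ising energy in the text."""
    return sum(w[(i, j)] for (i, j) in edges if sigma[i] != sigma[j])

# 2 x 2 ferromagnetic grid flattened to indices 0..3, weight 1 per edge:
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
w = {e: 1.0 for e in edges}
print(ising_energy((0, 1, 0, 1), edges, w))  # 2.0: two misaligned edges
```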
The partition function for these planar models is computable with a specialized polynomial time algorithm, as long as there is no external magnetic field [2]. In Figure 3, we compare the true value of the partition function Z* with the estimate obtained using FocusedFlatSAT and with the upper
² Benchmark instances available online at http://www.cs.cornell.edu/~ermonste
[Figure 3 plot data omitted; axes: weight w (x) vs. log10(Z) − log10(Z*) (y); series: FocusedFlatSAT, TRW.]
Figure 3: Error in log10(Z). Left: 40 × 40 ferromagnetic grid. Right: 32 × 32 spin glass grid.
Table 2: Log partition function for weighted formulas.

| Instance | n | m | Weight | log10 Z(w) | FocusedFlatSat log10 Z(w) | Time | IJGP-SampleSearch log10 Z(w) | Time | Gibbs log10 Z(w) | Time |
|---|---|---|---|---|---|---|---|---|---|---|
| grid32x32 | 1024 | 3968 | 1 | 16.0920 | 16.0964 | 628 | 14.4330 | 600 | 15.4856 | 651 |
| grid32x32 | 1024 | 3968 | 1 | 16.0920 | 16.0964 | 628 | 13.8980 | 2000 | — | — |
| grid40x40 | 1600 | 6240 | 1 | 23.5434 | 23.4844 | 1522 | 15.9386 | 2000 | 22.3125 | 1650 |
| 2bitmax6 | 252 | 766 | 5 | > 29.3222 | 30.4373 | 360 | 12.0526 | 600 | 25.1274 | 732 |
| 2bitmax6 | 252 | 766 | 5 | > 29.3222 | 30.4373 | 360 | 12.3802 | 2000 | — | — |
| wff.100.150 | 100 | 150 | 5 | > 21.2553 | 21.3187 | 5 | 21.3373 | 200 | 21.3992 | 40 |
| wff.100.150 | 100 | 150 | 8 | > 21.2553 | 21.2551 | 5 | 21.2694 | 200 | 21.3107 | 40 |
| ls8-normalized | 301 | 1603 | 3 | > 11.7324 | 17.6655 | 589 | 16.5458 | 600 | 8.6825 | 708 |
| ls8-normalized | 301 | 1603 | 6 | > 11.7324 | 11.7974 | 589 | −2.3987 | 600 | −17.356 | 770 |
| ls8-normalized | 301 | 1603 | 6 | > 11.7324 | 11.7974 | 589 | −1.7459 | 1200 | — | — |
| ls8-normalized | 301 | 1603 | 6 | > 11.7324 | 11.7974 | 589 | −1.8578 | 2000 | — | — |
| ls8-simplified-2 | 172 | 673 | 6 | > 4.3083 | 4.3379 | 100 | −1.8305 | 1200 | 2.8516 | 300 |
| ls8-simplified-4 | 119 | 410 | 6 | > 2.2479 | 2.3399 | 63 | 2.7037 | 1200 | −6.7132 | 174 |
| ls8-simplified-5 | 83 | 231 | 6 | > 1.3424 | 1.3880 | 40 | 1.3688 | 600 | 1.3420 | 51 |
bound given by TRW (which is generally much faster but inaccurate), for a range of w values. What is plotted is the accuracy, log Z − log Z*. We see that the estimate provided by FocusedFlatSAT is very accurate throughout the range of w values. For the ferromagnetic model, the bounds obtained by TRW, on the other hand, are tight only when the weights are sufficiently high, when essentially only the two ground states of energy zero matter. On spin glasses, where computing ground states is itself an intractable problem, TRW is unsurprisingly inaccurate even in the high weights regime. The consistent accuracy of FocusedFlatSAT here is a strong indication that the method is accurately computing the density of most of the underlying states. This is because, as the weight w changes, the value of the partition function is dominated by the contributions of a different set of states.
Table 2 (top) shows a comparison with IJGP-SampleSearch and Gibbs Sampling for the ferromagnetic case with w = 1. Here FocusedFlatSAT provides the most accurate estimates, even
when other methods are given a longer running time. E.g., IJGP is two orders of magnitude off
for the 32 × 32 grid.³ Results with other weights are similar but omitted due to limited space.
FocusedFlatSAT also significantly outperforms IJGP and Gibbs sampling in accuracy on the
circuit synthesis instance 2bitmax6. All methods perform well on randomly generated 3-SAT instances, but FocusedFlatSAT is much faster.
As another test case, we use formulas from a previously used model counting benchmark involving
n ? n Latin Square completion [11], and add a weight w to each constraint. Since these instances
have high treewidth, are non-planar, and beyond the reach of direct enumeration, we don't have
ground truth for this benchmark. However, we are able to provide a lower bound,4 which is given
by the number of models of the original formula. Our results are reported in Table 2. Our lower
bound indicates that the estimate given by FocusedFlatSAT is more accurate, even when other
methods are given a longer running time. As the last 3 lines of the table show, IJGP and Gibbs
sampling improve in performance as the problem is simplified more and more, by fixing the values
of 2, 4, or 5 ?cells? and simplifying the instance. Nonetheless, on the un-simplified ls8-normalized
with weight 6, both IJGP and Gibbs sampling underestimate by over 12 orders of magnitude.
³ On smaller instances with limited treewidth, IJGP-SampleSearch quickly provides good estimates.
⁴ The upper bound provided by TRW is very loose on this benchmark (possibly because of the conversion to a pairwise field) and not reported.
Table 3: Weight learning: likelihood of the training data x computed using learned weights.

| Type | Training Data | Optimal Likelihood (O) | FocusedFlatSAT Accuracy (F/O) | Alchemy Accuracy (A/O) |
|---|---|---|---|---|
| ThreeChain(30) | x = data-30-1 | 4.09 × 10^−27 | 1.0 | 0.08 |
| | x = data-30-2 | 9.31 × 10^−10 | 1.0 | 0.93 |
| FourChain(5) | x = dataFC-5-1 | 5.77 × 10^−6 | 1.0 | 0.61 |
| | x = dataFC-5-2 | 3.84 × 10^−3 | 1.0 | 0.000097 |
| HChain(10) | x = dataH-10-1 | 1.19 × 10^−9 | 1.0 | 0.87 |
| | x = dataH-10-2 | 2.62 × 10^−9 | 1.0 | 0.53 |
| SocialNetwork(5) | x = data-SN-1 | 2.98 × 10^−8 | 1.0 | 0.69 |
| | x = data-SN-2 | 2.44 × 10^−9 | 1.0 | 0.2 |
Weight learning. Suppose the set of soft constraints C_soft is composed of M disjoint sets of constraints {S_i}_{i=1}^M, where all the constraints c ∈ S_i have the same weight w_i that we wish to learn from data (for instance, these constraints can all be groundings of the same first-order formula in Markov Logic [8]). Let us assume for simplicity that there are no hard constraints. The probability P_w(x) can be parameterized by a weight vector w = (w_1, ..., w_M). The key observation is that the partition function can be written as
$$Z(w) = \sum_{\ell_1} \sum_{\ell_2} \cdots \sum_{\ell_M} n(\ell_1, \ldots, \ell_M) \exp(-w \cdot \ell),$$
where n(ℓ_1, ..., ℓ_M) gives the number of configurations that violate ℓ_i constraints of type S_i for i = 1, ..., M. This function n(ℓ_1, ..., ℓ_M) is precisely the density of states required to compute Z(w) for all values of w, without additional inference steps.
Given training data x ∈ {0,1}^N, the problem of weight learning is that of finding arg max_w P_w(x), where P_w(x) is given by Eqn. (1). Once we compute n(ℓ_1, ..., ℓ_M) using FocusedFlatSAT, we can efficiently evaluate Z(w), and therefore P_w(x), as a function of the parameters w = (w_1, ..., w_M). Using this efficient evaluation as a black box, we can solve the weight learning problem using a numerical optimization package with no additional inference steps required.⁵
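A sketch of this black-box evaluation, assuming the joint density of states is stored as a dictionary from violation-count tuples to configuration counts; a real implementation would hand log P_w(x) to a numerical optimizer rather than the toy grid search used here:

```python
import math
import itertools

def log_z(w, dos):
    """log Z(w) from a joint density of states: Z(w) = sum over counts l of
    n(l) * exp(-w . l). Uses log-sum-exp for numerical stability."""
    terms = [math.log(n) - sum(wi * li for wi, li in zip(w, counts))
             for counts, n in dos.items()]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

def log_likelihood(w, dos, l_x):
    """log P_w(x) for data x violating l_x[i] constraints of type i."""
    return -sum(wi * li for wi, li in zip(w, l_x)) - log_z(w, dos)

# Toy example with M = 2 constraint types over a tiny space:
dos = {(0, 0): 1, (1, 0): 3, (0, 1): 2, (1, 1): 10}
best = max(itertools.product([x / 10 for x in range(31)], repeat=2),
           key=lambda w: log_likelihood(w, dos, l_x=(0, 1)))
print(best)
```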
We evaluate this learning method on relatively simple instances on which commonly used software
such as Alchemy can be a few orders of magnitude off from the optimal likelihood of the training
data. Specifically, Table 3 compares the likelihood of the training data under the weights learned by
FocusedFlatSAT and by Generative Weight Learning [7], as implemented in Alchemy, for four
types of Markov Logic theories. The Optimal Likelihood value is obtained using a partition function
computed either by direct enumeration or using analytic results for the synthetic instances.
The instance ThreeChain(K) is a grounding of the following first-order formulas: ∀x P(x) ⇒ Q(x), ∀x Q(x) ⇒ R(x), ∀x R(x) ⇒ P(x), while FourChain(K) is a similar chain of 4 implications. The instance HChain(K) is a grounding of ∀x P(x) ⇒ Q(x) ⇒ R(x), ∀x R(x) ⇒ P(x), where x ∈ {a_1, a_2, ..., a_K}. The instance SocialNetwork(K) (from the Alchemy Tutorial) is a grounding of the following first-order formulas, where x, y ∈ {a_1, a_2, ..., a_K}: ∀x ∀y Friend(x, y) ⇒ (Smokes(x) ⇔ Smokes(y)), ∀x Smokes(x) ⇒ Cancer(x).
Table 3 shows the accuracy of FocusedFlatSAT and Alchemy for the weight learning task, as measured by the resulting likelihood of observing the data in the learned model, which we are trying to maximize. The accuracy is measured as the ratio of the likelihood in the learned model (F and A, resp.) to the optimal likelihood (O). In these instances, FocusedFlatSAT always matches the optimal likelihood up to two digits of precision, while Alchemy can underestimate it by several orders of magnitude, e.g., by over 4 orders in the case of FourChain(5).
6 Conclusion
We introduced FocusedFlatSAT, a Markov Chain Monte Carlo technique based on the flat histogram method with a random walk style component to estimate the partition function from the
density of states. We demonstrated the effectiveness of our approach on several types of problems.
Our method outperforms the current state-of-the-art techniques on a variety of instances, at times
by several orders of magnitude. Moreover, from the density of states we can obtain directly the
partition function Z(w) as a function of the model parameters w. We show an application of this
property to weight learning in Markov Logic Networks.
⁵ Storing the full density function n(ℓ_1, ..., ℓ_M) of course requires space (and hence time) that is exponential in M. One must use a relatively coarse partitioning of the state space for scalability when M is large.
References
[1] Martin J. Wainwright and Michael I. Jordan. Graphical Models, Exponential Families, and Variational Inference. Now Publishers Inc., Hanover, MA, USA, 2008.
[2] N.N. Schraudolph and D. Kamenetsky. Efficient exact inference in planar Ising models. In Proc. of NIPS-08, 2008.
[3] F. Wang and D.P. Landau. Efficient, multiple-range random walk algorithm to calculate the density of states. Physical Review Letters, 86(10):2050–2053, 2001.
[4] M.J. Wainwright, T.S. Jaakkola, and A.S. Willsky. A new class of upper bounds on the log partition function. Information Theory, IEEE Transactions on, 51(7):2313–2335, 2005.
[5] Vibhav Gogate and Rina Dechter. SampleSearch: A scheme that searches for consistent samples. Journal of Machine Learning Research, 2:147–154, 2007.
[6] Mark Jerrum and Alistair Sinclair. The Markov chain Monte Carlo method: an approach to approximate counting and integration, pages 482–520. PWS Publishing Co., Boston, MA, USA, 1997.
[7] P. Domingos, S. Kok, H. Poon, M. Richardson, and P. Singla. Unifying logical and statistical AI. In Proc. of AAAI-06, pages 2–7, Boston, Massachusetts, 2006. AAAI Press.
[8] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1):107–136, 2006.
[9] H. Poon and P. Domingos. Sound and efficient inference with probabilistic and deterministic dependencies. In Proc. of AAAI-06, pages 458–463, 2006.
[10] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. Information Theory, IEEE Transactions on, 51(7):2282–2312, 2005.
[11] S. Ermon, C. Gomes, and B. Selman. Computing the density of states of Boolean formulas. In Proc. of CP-2010, 2010.
[12] B. Selman, H.A. Kautz, and B. Cohen. Local search strategies for satisfiability testing. In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1996.
[13] C.P. Gomes, J. Hoffmann, A. Sabharwal, and B. Selman. From sampling to model counting. In Proc. of IJCAI-07, 2007.
[14] V. Gogate and R. Dechter. Approximate counting by sampling the backtrack-free search space. In Proc. of AAAI-07, pages 198–203, 2007.
3,809 | 4,449 | Policy Gradient Coagent Networks
Philip S. Thomas
Department of Computer Science
University of Massachusetts Amherst
Amherst, MA 01002
[email protected]
Abstract
We present a novel class of actor-critic algorithms for actors consisting of sets
of interacting modules. We present, analyze theoretically, and empirically evaluate an update rule for each module, which requires only local information: the
module?s input, output, and the TD error broadcast by a critic. Such updates are
necessary when computation of compatible features becomes prohibitively difficult and are also desirable to increase the biological plausibility of reinforcement
learning methods.
1
Introduction
Methods for solving sequential decision problems with delayed reward, where the problems are formulated as Markov decision processes (MDPs), have been compared to the learning mechanisms of
animal brains [3, 4, 9, 10, 13, 20, 22]. These comparisons stem from similarities between activation of dopaminergic neurons and reward prediction error [19], also called the temporal difference
(TD) error [21]. Dopamine is broadcast to large portions of the human brain, suggesting that it may
be used in a similar manner to the TD error in reinforcement learning (RL) [23] systems, i.e., to
facilitate improvements to the brain?s decision rules.
Systems with a critic that computes and broadcasts the TD error to another module called the actor,
which stores the current decision rule, are called actor-critic architectures. Chang et al. [7] present a
compelling argument that the fly brain is an actor-critic by finding the neurons making up the critic
and then artificially activating them to train the actor portions of the brain. However, current actor-critic methods in the artificial intelligence community remain biologically implausible because each
component of the actor can only be updated with detailed knowledge of the entire actor. This forces
computational neuroscientists to either create novel methods [14] or alter existing methods from the
artificial intelligence community in order to enforce locality constraints (e.g., [16]).
[Figure 1 diagram omitted: input s feeds modules A1 (input x1, parameters θ1, output a1), A2 (x2, θ2, a2), and A3 (x3, θ3, a3); the network output a is a3.]
Figure 1: Example modular actor.
The actor in an actor-critic maintains a decision rule, π, called a policy, parameterized by a vector θ, that computes the probability of an action (decision), a, given an estimate of the current state of the world, s_t, and the current parameters, θ_t. In some cases, an actor can be broken into multiple interacting modules, each of which computes an action given some input, x, which may contain elements of s as well as the outputs of other modules. An example of such a modular actor is provided in Figure 1. This actor consists of three modules, A1, A2, and A3, with parameters θ1, θ2,
and θ3, respectively. The ith module takes input x_i, which is a subset of the state features and the outputs of other modules. It then produces its action a_i according to its policy, π^i(x_i, a_i, θ_i) = Pr(a_i | x_i, θ_i). The output, a, of the whole modular actor is one of the module outputs; in this case a = a_3. Later we modify this to allow the action a to follow any distribution with the state and module outputs as parameters. This modular policy can also be written as a non-modular policy that is a function of θ = [θ_1, θ_2, θ_3], i.e., π(s, a, θ) = Pr(a | s, θ). We assume that the modular policy is not recurrent. Such modular policies appear frequently in models of the human brain, with modules corresponding to neurons or collections thereof [12, 16].
Current actor-critic methods (e.g., [11, 15, 23, 24]) require knowledge of ∂π/∂θ_i in order to update θ_i. However, ∂π/∂θ_i often depends on the current values of all other parameters as well as the structure defining how the parameters are combined to produce the decision rule. This is akin to assuming that a neuron (or cluster of neurons), A_i, must know its influence on the final decision rule implemented. Were another module to modify its policy such that ∂π/∂θ_i changes, a message must be sent to alert A_i of the exact changes so that it can update its estimate of ∂π/∂θ_i, which is biologically implausible.
Rather than keeping a current estimate of ∂π/∂θ_i, one might attempt to compute it on the fly via the error backpropagation learning algorithm [17]. In this algorithm, each module, A_i, beginning with the output modules, computes its own update and then sends a message containing ∂π/∂a_j to each A_j that A_i uses as input (we call these A_j parents, and A_i a child of A_j). Once all of A_i's children have updated, it will have all of the information required to compute ∂π/∂θ_i. Though an improvement upon the naive message passing scheme, backpropagation remains biologically implausible because it would require rapid transmission of information backwards along the axon, which has not been observed [8, 28]. However, gradient descent remains one of the most frequently used methods. For example, Rivest et al. [16] use gradient descent to update a modular actor, and are forced to assume that certain derivatives are always one in order to maintain realistic locality constraints.
This raises the question: could each module update given only local information that does not include explicit knowledge of ∂π/∂θ_i? We assume that a critic exists that broadcasts the TD error, so a module's local information would consist of its input x_i, which is not necessarily a Markov state representation, its output a_i, and the TD error. Though this has been achieved for tasks with immediate rewards [3, 26, 27], we are not aware of any such methods for tasks with delayed rewards. In this paper we present a class of algorithms, called policy gradient coagent networks (PGCNs), that do exactly this: they allow modules to update given only local information.
PGCNs are also a viable technique for non-biological reinforcement learning applications in which ∂π/∂θ is prohibitively difficult to compute. For example, consider an artificial neural network where the output of each neuron follows some probability distribution over the reals. Though this would allow for exploration at every level, rather than just at the level of primitive actions of the output layer, expressions for π(s, a, θ) would require a nested integral for every node and ∂π/∂θ would be difficult to compute or approximate for networks with many neurons and layers. Because PGCNs do not require knowledge of ∂π/∂θ, they remain simple even in such cases, making them a practical choice for complex parameterized policies.
2 Background
An MDP is a tuple M = (S, A, P, R, d_{s0}), where S and A are the sets of possible states and actions respectively, P gives state transition probabilities: P(s, a, s′) = Pr(s_{t+1}=s′ | s_t=s, a_t=a), where t is the current time step, R(s, a) = E[r_t | s_t=s, a_t=a] is the expected reward when taking action a in state s, and d_{s0}(s) = Pr(s_0=s). An agent A with time-variant parameters θ_t ∈ Θ (typically function approximator weights, learning rates, etc.) observes the current state s_t, selects an action, a_t, based on s_t and θ_t, which is used to update the state according to P. It then observes the resulting state, s_{t+1}, receives uniformly bounded reward r_t according to R, and updates its parameters to θ_{t+1}.
A policy is a mapping from states to probabilities of selecting each possible action. A's policy π may be parameterized by a vector, θ, such that π(s, a, θ) = Pr(a_t=a | s_t=s, θ_t=θ). We assume that ∂π(s, a, θ)/∂θ exists for all s, a, and θ. Let d^θ_M(s) denote the stationary distribution over states
under the policy induced by θ. We can then write the average reward for θ as
$$J_M(\theta) = \lim_{T \to \infty} \frac{1}{T}\, E\!\left[\left.\sum_{t=0}^{T-1} r_t \,\right|\, M, \theta\right]. \qquad (1)$$
The state-value function, which maps states to the difference between the average reward and the expected reward if the agent follows the policy induced by θ starting in the provided state, is
$$V_M^{\theta}(s) = \sum_{t=1}^{\infty} E\left[r_t - J(\theta) \mid M, s_0 = s, \theta\right]. \qquad (2)$$
Lastly, we define the TD error to be δ_t = r_t − J_M(θ) + V^θ_M(s_{t+1}) − V^θ_M(s_t).
2.1 Policy Gradient
One approach to improving a policy for an MDP is to adjust the parameters θ to ascend the policy gradient, ∇_θ J_M(θ). For reviews of policy gradient methods, see [5, 15, 24]. A common variable in policy gradient methods is the compatible features, ψ_{sa} = ∇_θ log π(s, a, θ). Bhatnagar et al. [5] showed that δ_t ψ_{sa} is an unbiased estimate of ∇_θ J_M(θ) if s ∼ d^θ_M(·) and a ∼ π(s, ·, θ). This results in a simple actor-critic algorithm, which we reproduce from [5]:
$$\hat J_{t+1} = (1 - c\,\alpha_t)\hat J_t + c\,\alpha_t r_t \qquad (3)$$
$$\delta_t = r_t - \hat J_{t+1} + v_t \cdot \phi(s_{t+1}) - v_t \cdot \phi(s_t) \qquad (4)$$
$$v_{t+1} = v_t + \alpha_t \delta_t \phi(s_t) \qquad (5)$$
$$\theta_{t+1} = \theta_t + \beta_t \delta_t \psi_{s_t a_t}, \qquad (6)$$
where Ĵ is a scalar estimate of J, δ_t remains the scalar TD error, φ is any function taking S to a feature space for linear value function approximation, v is a vector of weights for the approximation v · φ(s) ≈ V^θ_M(s), c is a constant, and α_t and β_t are learning rate schedules such that
$$\sum_{t=0}^{\infty} \alpha_t = \sum_{t=0}^{\infty} \beta_t = \infty, \qquad \sum_{t=0}^{\infty} \left(\alpha_t^2 + \beta_t^2\right) < \infty, \qquad \beta_t = o(\alpha_t). \qquad (7)$$
One example of such a schedule would be α_t = α_C / (α + t^{2/3}) and β_t = β_C / (β + t), for some constants α, α_C, β, and β_C. We call this algorithm the vanilla actor-critic (VAC). Bhatnagar et al. [5] show that under certain mild assumptions and in the limit as t → ∞, VAC will converge to a θ_t that is within a small neighborhood of a local maximum of J_M(θ).
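A minimal sketch of one step of these updates, assuming the caller supplies the critic features φ and the compatible features ψ (the function and argument names are illustrative, not from the paper):

```python
import numpy as np

def vac_step(J_hat, v, theta, s, a, r, s_next, phi, psi, alpha, beta, c):
    """One vanilla actor-critic step implementing updates (3)-(6)."""
    J_hat = (1 - c * alpha) * J_hat + c * alpha * r            # (3)
    delta = r - J_hat + v @ phi(s_next) - v @ phi(s)           # (4)
    v = v + alpha * delta * phi(s)                             # (5) critic
    theta = theta + beta * delta * psi(s, a, theta)            # (6) actor
    return J_hat, v, theta, delta

# Toy usage with 2 critic features and a 2-parameter policy:
phi = lambda s: np.asarray(s, dtype=float)
psi = lambda s, a, th: np.asarray(s, dtype=float) * (1.0 if a else -1.0)
out = vac_step(0.0, np.zeros(2), np.zeros(2), (1, 0), 1, -1.0, (0, 1),
               phi, psi, alpha=0.1, beta=0.01, c=0.1)
```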
Some more advanced actor-critic methods ascend the natural policy gradient [1, 5, 15],
$$\tilde\nabla_\theta J_M(\theta) = G(\theta)^{-1} \nabla_\theta J_M(\theta), \qquad (8)$$
where G(θ) = E_{s∼d^θ_M(·), a∼π(s,·,θ)}[∇_θ log π(s, a, θ) ∇_θ log π(s, a, θ)^T] is the Fisher information matrix of the policy. To help differentiate between the two types of policy gradients, we refer to the non-natural policy gradient as the vanilla policy gradient hereafter. One view of the natural gradient is that it corrects for the skewing of the vanilla gradient that is induced by a particular parameterization of the policy [2]. Empirical studies have found that ascending the natural gradient results in faster convergence [1, 5, 15]. One algorithm for ascending the natural policy gradient is the Natural-Gradient Actor-Critic with Advantage Parameters [5], which we abbreviate as NAC and use in our case study.
VAC and NAC have a property, which we reference later as Property 1, that is common to almost all other actor-critic methods: if the policy is a function of x = f(s), for any f, such that π(s, a, θ) can be written as π(x, a, θ) or π(f(s), a, θ), then updates to the policy parameters θ are independent of s given x, a, and δ_t. For example, if s = (s_1, s_2) and f(s) = s_1 so that the policy is a function of only s_1, then the update to θ requires knowledge of only s_1, a, and δ_t, and not s_2. This is the one crucial property that will allow the actor to update given only local information.
VAC and NAC, as well as all other algorithms referenced, require computation of ∇_θ log π(s, a, θ). Hence, none of these methods allow for local updates to modular policies, which makes them undesirable from a biological standpoint, and impractical for policies for which this derivative is prohibitively difficult to compute. However, by combining these methods with the CoMDP framework reviewed in Section 2.2 and by taking advantage of Property 1, the updates to the actor can be modified to satisfy the locality constraint.
2.2 Conjugate Markov Decision Processes
In this section we review the aspects of the conjugate Markov decision process (CoMDP) framework that are relevant to this work. Though Thomas and Barto [25] present the CoMDP framework for the discounted reward setting with finite state, action, and reward spaces, the extension to the average reward and infinite setting used here is straightforward. To solve M, one may create a network of agents A_1, A_2, ..., A_n, where A_i has output a^i ∈ A^i, where A^i is any space, though typically the reals or integers. All agents receive the same reward. We focus on the case where A_i = {A_i, C_i} are all actor-critics, i.e., they contain an actor, A_i, and a critic, C_i. The action a_t ∈ A for M is computed as a_t ∼ ρ(s, a^1, a^2, ..., a^n), for some distribution ρ. Each agent A_i has parameters θ^i defining its policy. We define θ^{−i} = ∪_{j∈{1,2,...,n}∖{i}} θ^j to be the parameters of all agents other than A_i. Each agent takes as input s^i, which contains the state of M and the outputs of an arbitrary number of other agents: s^i ∈ S × ∏_j A^j, where ∏_j A^j is the Cartesian product of the output sets of all the A_j whose output is used as input to A_i. Notice that the s^i are not the components of s; rather, s is the state of M, while s^i is the input to A_i. We require the graph with nodes for each A_i, and a directed edge from A_i to A_j if A_j takes a^i as part of its input, to be acyclic. Thus, the network of agents must be feed-forward, so we can assume an ordering of the A_i such that if a^j is part of s^i, then j < i. When executing the modular policy, the policies of the A_i can be executed in this order so that all requisite information for computing a module's output is always available. Thomas and Barto [25] call each A_i a coagent and the entire network a coagent network.
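A sketch of executing such a network in this topological ordering (the module representation and policy signatures are assumptions for illustration):

```python
import random

def run_coagent_network(s, modules):
    """Execute a feed-forward coagent network. `modules` is a list of
    (parents, policy) pairs ordered so each module's parents precede it;
    `parents` indexes earlier module outputs, and policy(x) samples an
    action from pi^i(x, ., theta^i)."""
    outputs = []
    for parents, policy in modules:
        x = (s,) + tuple(outputs[j] for j in parents)   # module input x^i
        outputs.append(policy(x))
    return outputs   # the network's action is, e.g., outputs[-1]

coin = lambda x: random.randint(0, 1)            # placeholder stochastic policy
mods = [((), coin), ((), coin), ((0, 1), coin)]  # A3 reads a1 and a2
print(run_coagent_network(s=(0.3, 0.7), modules=mods))
```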
An agent A_i may treat the rest of the network and M as its environment, where it sees states s^i_t and takes actions a^i_t resulting in reward r_t (the same for all A_i) and a transition to state s^i_{t+1}. This environment is called a conjugate Markov decision process (CoMDP), which is an MDP M^i = (S × ∏_j A^j, A^i, P^i, R^i, d^i_{s0}), where S × ∏_j A^j is the state space, A^i is the action space, P^i(s^i, a^i, ŝ^i) = Pr(s^i_{t+1} = ŝ^i | s^i_t = s^i, a^i_t = a^i, M, θ^{−i}), R^i(s^i, a^i) = E[r_t | s^i_t = s^i, a^i_t = a^i, M, θ^{−i}] gives the expected reward when taking action a^i in state s^i, and d^i_{s0} is the distribution over initial states of M^i. We write π^i(s^i, a^i, θ^i) to denote A_i's policy for M^i. Notice that M^i depends on θ^{−i}. Thus, as the policies of other coagents change, so does the CoMDP with which A_i interacts. While [25] considers generic methods for handling this nonstationarity, we focus on the special case in which all A_i are policy gradient methods.
Theorem 3 of [25] states that the policy gradient of M can be decomposed into the policy gradients for all of the CoMDPs, M^i:
$$\frac{\partial J_M(\theta^1, \ldots, \theta^n)}{\partial [\theta^1, \ldots, \theta^n]} = \left[\frac{\partial J_M(\theta^1, \ldots, \theta^n)}{\partial \theta^1}, \ldots, \frac{\partial J_M(\theta^1, \ldots, \theta^n)}{\partial \theta^n}\right] = \left[\frac{\partial J_{M^1}(\theta^1)}{\partial \theta^1}, \frac{\partial J_{M^2}(\theta^2)}{\partial \theta^2}, \ldots, \frac{\partial J_{M^n}(\theta^n)}{\partial \theta^n}\right]. \qquad (9)$$
Thus, if each coagent computes and follows the policy gradient based on the local environment that it sees, the coagent network will follow its policy gradient on M.
Thomas and Barto [25] also show that the value functions for M and all the CoMDPs are the same for all s_t, if the additional state components of M^i are drawn according to the modular policy:
$$V_{M^1}^{\theta^1}(s_t) = V_{M^2}^{\theta^2}(s_t) = \cdots = V_{M^n}^{\theta^n}(s_t) = V_M^{\theta}(s_t). \qquad (10)$$
The state-value based TD error is therefore the same as well:
$$\delta_t = r_t - J_M(\theta) + V_M^{\theta}(s_{t+1}) - V_M^{\theta}(s_t) = r_t - J_{M^i}(\theta^i) + V_{M^i}^{\theta^i}(s_{t+1}) - V_{M^i}^{\theta^i}(s_t), \quad \forall i. \qquad (11)$$
This means that, if the coagents require δ_t, we can maintain a global critic, C, that keeps an estimate of V^θ_M, which can be used to replace every C_i by computing δ_t and broadcasting it to each A_i. Because all A_i share a global critic, C, all that remains of each module is the actor A_i. We therefore refer to each A_i as a module.
Notice that the CoMDPs, M^i, and thus the coagents, A_i, have S as part of their state space. This is required for M^i to remain Markov. However, if the actor's policy is a function of some x^i = f(s^i) for any f, i.e., the policy can be written as π^i(x^i, a^i, θ^i), then, by Property 1, updates to the actor's policy require only the TD error, a^i, and x^i. Hence, the full Markovian state representation is only needed by the global critic, C. The modules, A_i, will be able to perform their updates given only their input: the x^i portion of the state of M^i.
3 Methods
The CoMDP framework tells us that, if each module is an actor that computes the policy gradient for its local environment (CoMDP), then the entire modular actor will ascend its policy gradient. Actor-critics satisfying Property 1 are able to perform their policy updates given only local information: the policy's input x_t, the most recent action a_t, and the TD error δ_t. Combining these two, each module A_i can compute its update given only its local input x^i_t, most recent action a^i_t, and the TD error δ_t. We call any network of coagents, each using policy gradient methods, a policy gradient coagent network (PGCN). One PGCN is the vanilla coagent network (VCN), which uses VAC for all modules (coagents), and maintains a global critic that computes and broadcasts δ_t. The VCN algorithm is depicted diagrammatically in Figure 2, where ψ^i_{x^i,a^i} = ∇_{θ^i} log π^i(x^i, a^i, θ^i) are the compatible features for the ith module. Notice that δ_t ψ^i_{x^i_t a^i_t} is an unbiased estimate of the policy gradient for M^i [5], which is an unbiased estimate of part of the policy gradient for M by Equation 9.
[Figure 2 diagram omitted: the global critic applies updates (3)–(5) to Ĵ and v, computes δ_t, and broadcasts it; each module A_i acts via a^i_t ∼ π^i(x^i_t, ·, θ^i_t) and trains via θ^i_{t+1} = θ^i_t + β_t δ_t ψ^i_{x^i_t a^i_t}.]
Figure 2: Diagram of the vanilla coagent network (VCN) algorithm. The global critic observes (s_t, r_t, s_{t+1}) tuples, updates its estimate Ĵ of the average reward, which it uses to compute the TD error δ_t, which is then broadcast to all of the modules, A_i. Lastly, it updates the parameters, v, of its state-value estimate. Each module A_i draws its actions from π^i(x^i_t, ·, θ^i_t) and then computes updates to θ^i given its input x^i_t, action a^i_t, and the TD error, δ_t, which was broadcast by the global critic.
To implement VCN, observe the current state s_t, compute the module outputs a^i_t and then a_t ∼ ρ(s_t, a^1_t, a^2_t, ..., a^n_t). This action will result in a transition to s_{t+1} with reward r_t. Given s_t, r_t, and s_{t+1}, the global critic can execute to produce δ_t, which can then be used to train each module A_i. Notice that the A_i can update concurrently. This process then repeats.
4 The Decomposed Natural Policy Gradient
Another interesting PGCN, which we call a natural coagent network (NCN), would use coagents that ascend the natural policy gradient, e.g., NAC. However, Equation 9 does not hold for natural gradients:
$$\tilde\nabla_\theta J_M(\theta) \ne \left[\tilde\nabla_{\theta^1} J_{M^1}(\theta^1),\ \tilde\nabla_{\theta^2} J_{M^2}(\theta^2),\ \ldots,\ \tilde\nabla_{\theta^n} J_{M^n}(\theta^n)\right] \equiv \hat\nabla_\theta J_M(\theta), \qquad (12)$$
where θ = [θ^1, θ^2, ..., θ^n] and ∇̂_θ J_M(θ) is an estimate of the natural policy gradient that we call the decomposed natural policy gradient, which has an implicit dependence on how θ is partitioned into n components. Hence, a PGCN, where each module computes its natural policy gradient, would not follow the natural policy gradient, but rather ∇̂_θ J_M(θ) = Ĝ(θ)^{−1} ∇_θ J_M(θ), an approximation thereto, where Ĝ(θ) is an approximation of G(θ), constructed by:
$$\hat G(\theta)_{ij} = \begin{cases} 0 & \text{if the } i\text{ and } j\text{th elements of } \theta \text{ are in different modules} \\ G(\theta^k)_{ij} & \text{if the } i\text{ and } j\text{th elements of } \theta \text{ are both in module } A_k, \end{cases} \qquad (13)$$
where G(θ^k) is the Fisher information matrix of the kth module's policy:
$$G(\theta^k) = E_{s^k \sim d^{\theta^k}_{M^k}(\cdot),\, a^k \sim \pi^k(x^k, \cdot, \theta^k)}\!\left[\nabla_{\theta^k} \log \pi^k(x^k, a^k, \theta^k)\, \nabla_{\theta^k} \log \pi^k(x^k, a^k, \theta^k)^T\right], \qquad (14)$$
where G(θ^k)_{ij} in Equation 13 denotes the entry corresponding to the i and jth elements of θ, which are elements of θ^k.
The decomposed natural policy gradient is intuitively a trade-off between the natural policy gradient and the vanilla policy gradient, depending on the granularity of modularization. For example, if the policy is one module, A_1, and ρ(s, a^1) = a^1, then the decomposed natural policy gradient is trivially the same as the natural policy gradient. On the other hand, as the policy is broken into more
and more modules, the gradient begins to differ more and more from the natural policy gradient, because the structure of the modular policy begins to influence the direction of the gradient. With finer granularity, Ĝ(θ) will tend to a diagonal approximation of the identity matrix. If the modular actor contains one parameter per module and the module inputs are normalized, it is possible for Ĝ(θ)^{−1} = I, in which case the decomposed natural policy gradient will be equivalent to the vanilla policy gradient. Hence, the coarser the modularization (fewer modules), the closer the decomposed natural policy gradient is to the natural policy gradient, while the finer the modularization (more modules), the closer the decomposed natural policy gradient may come to the vanilla policy gradient.
Each term of the decomposed natural policy gradient is within ninety degrees of the vanilla policy
gradient, so a system will converge to a local optimum if it follows the decomposed natural policy
gradient and the step size is decayed appropriately.
5 Variance of Gradient Estimates
Let ψ_{s,a,i} = ∇_{θ^i} log π(s, a, θ) be the components of ψ_{s,a} that correspond to the parameters of A_i. Both δ_t ψ^i_{x^i,a^i}, the update to the parameters of A_i by VCN, and δ_t ψ_{s,a,i}, the update by VAC, are unbiased estimates of ∇_{θ^i} J_{M^i}(θ^i) = ∇_{θ^i} J_M(θ). This means that E[δ_t ψ_{s,a,i}] = E[δ_t ψ^i_{x^i,a^i}], which is particularly interesting because δ_t is the same for both, so the only difference between the two are the compatible features used. Whereas ψ_{s,a,i} requires computation of the derivative of the entire modular policy, π, ψ^i_{x^i,a^i} only requires differentiation of π^i. Thus, the latter satisfies the locality constraint and is also easier to compute. However, this benefit comes at the cost of higher variance. This increase in variance appears regardless of the actor-critic method used. In this section we focus on VAC due to its simplicity, though the argument that stochasticity in the CoMDP is the root cause of the variance of gradient estimates carries over to PGCNs using other actor-critic methods as well. This increase in variance has also been observed in multi-agent reinforcement learning research as additional stochasticity in one agent's environment when another explores [18].
Consider using VAC on any MDP. Bhatnagar et al. [5] show that E[δ_t | s_t = s, a_t = a, M, θ] can be viewed as the advantage of taking action a_t in state s_t over following the policy induced by θ. If it is positive, it means taking a_t in s_t is better than following π. If it is negative, then a_t is worse. So, following E[δ_t ψ_{s_t,a_t}] increases the likelihood of a_t if it is advantageous, and decreases the likelihood of a_t if it is disadvantageous. However, our updates use samples rather than the expected value, so an action a_t that is actually worse could, due to stochasticity in the environment, result in a TD error that suggests it is advantageous. Thus, the gradient estimates are influenced by the stochasticity of the transition function P and reward function R. If P or R is very stochastic, the same (s, a) pair will result in seemingly random TD errors, which manifests as large variance in the δ_t ψ_{s_t,a_t} samples.
Now consider the stochasticity in M and M^i. The state transitions of M^i depend not only on M's transition function, but may also depend on the actions selected by some or all A_j, j ≠ i. Consider the modular actor from Figure 1 in the case where the transitions and rewards of M are deterministic. The transition function for M^3, the CoMDP for A_3, remains relatively deterministic because its actions completely determine the transitions of M. We therefore expect the variance in the gradient estimate for the parameters of A_3 to be only slightly higher for VCN than it is for VAC. However, the actions of A_1 and A_2 influence the transitions of M indirectly through the actions of A_3, which adds a layer of stochasticity to their transition functions. We therefore expect policy gradient estimates for their parameters to have higher variance. In summary, the stochasticity in the CoMDPs is responsible for VCN's policy gradient estimates having higher variance than those of VAC.
We performed a simple study using the modular actor from Figure 1 on a 10 × 10 gridworld with deterministic actions {up, down, left, right}, a reward of −1 for all transitions, factored state (x, y), and with a terminal state at (10, 10). For the modular actor, A^1 = A^2 = {0, 1}, A^3 = {up, down, left, right}, A_1 and A_2 both received the full state (x, y), and all modules used
[Figure 3 plot data omitted: (a) bar chart of update variance per module (A1, A2, A3) for VAC and VCN; (b) update variance under VCN as a function of ε.]
Figure 3: (a) Variance of the VAC and VCN updates for weights in each of the three modules. (b) Variance of updates using VCN with various ε. Standard error bars are provided (n = 100).
linear function approximation rather than a tabular state representation. All modules also used softmax action selection:
$$\pi^i(x^i, a, \theta^i) = \frac{e^{\tau\, \theta^i_a \cdot x^i}}{\sum_{\hat a \in A^i} e^{\tau\, \theta^i_{\hat a} \cdot x^i}}, \qquad (15)$$
where τ is a constant scaling the amount of exploration, and where the parameters θ^i for the ith module contain a weight vector θ^i_a for each action a ∈ A^i. The critic is common to both methods,
and our goal is not to compare methods for value function approximation, so we used a tabular critic.
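For concreteness, a sketch of the soft-max selection of (15) for a single module is given below; this is a minimal illustration assuming numpy, with variable names of our own choosing rather than the authors' code.

    import numpy as np

    def softmax_policy(theta, x, actions, tau):
        """Soft-max action selection of (15) for one module.

        theta : dict mapping each action to its weight vector theta_a^i
        x     : feature vector x^i seen by the module
        tau   : exploration-scaling constant
        """
        prefs = np.array([tau * np.dot(theta[a], x) for a in actions])
        prefs -= prefs.max()                  # numerical stability
        probs = np.exp(prefs)
        probs /= probs.sum()
        return actions[np.random.choice(len(actions), p=probs)], probs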
With all actor weights fixed and selected randomly with uniform distribution from (−1, 1), we first
observed that the mean of the updates δ_t ψ_{s_t,a_t,i} and δ_t ψ^i_{x^i_t,a^i_t} are approximately equal, as expected,
and then computed the variance of both updates. The results are shown in Figure 3(a). As predicted,
the variance of the gradient estimates for each parameter of A^1 and A^2 is larger for VCN, though
the variance of the gradient estimate for each parameter of A^3 is similar for VCN and VAC.
6 Variance Mitigation
To mitigate the increase in the variance of gradient estimates, we observe that, in general, the additional variance due to the other modules can be completely removed for a module A^i if every other
module is made to be deterministic. This is not practical because every module must explore in order
to learn. However, we can approximate it by decreasing the exploration of each module, making its
policy less stochastic and more greedy. For example, every module could take a deterministic greedy
action without performing any updates with probability 1 − ε for some ε ∈ [0, 1). With probability
ε the module would act using soft-max action selection and update its parameters. As ε → 0, the
probability of two modules exploring simultaneously goes to zero, decreasing the variance in M^i
but also decreasing the percent of time steps during which each module trains. When ε = 1, every
module explores and updates on every step, so the algorithm is the original PGCN algorithm (VCN
if using VAC for each module).
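A sketch of this scheme follows; it is illustrative only, and the module interface with greedy_action, sample_action and update is hypothetical.

    import numpy as np

    def epsilon_gated_step(module, x, epsilon):
        """Variance mitigation: with probability 1 - epsilon act greedily and
        skip the update; with probability epsilon explore and update."""
        if np.random.rand() < epsilon:
            a = module.sample_action(x)   # soft-max exploration
            module.update(x, a)           # learn on exploratory steps only
        else:
            a = module.greedy_action(x)   # deterministic, no update
        return a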
We repeated the gridworld study of the variance in gradient estimates for various ε. The results,
shown in Figure 3(b), show that smaller ε can be effective in reducing the variance of gradient
estimates. Notice that VCN using ε = 1 is equivalent to VCN as described previously, so the
points for ε = 1 in Figure 3(b) correspond exactly to the VCN data in Figure 3(a). Thus, if the
variance in gradient estimates precludes learning, we suggest making the policies of the modules
more deterministic by decreasing exploration and increasing exploitation.
Several questions remain. First, though the variance decreases, the amount of exploration also decreases, so what is the net effect on learning speed? Second, how does PGCN compare to an actor-critic where ∇_θ π(s, a, θ) is known? Lastly, is there a significant loss in performance when using the
decomposed natural policy gradient as opposed to the true natural policy gradient? We attempt to
answer these questions in the following section.
Algorithm | α    | β    | c    | τ12  | τ3  | Average Reward | Standard Error
VAC       | 0.75 | 0.25 | 0.13 | 0.5  | 2.5 | −23.13         | 0.09
VCN       | 0.25 | 0.1  | 0.04 | 0.1  | 3.5 | −29.15         | 0.09
NAC       | 0.5  | 0.1  | 0.02 | 0.05 | 1   | −24.91         | 0.08
NCN       | 0.5  | 0.1  | 0.02 | 0.05 | 1   | −28.32         | 0.14

Table 1: Best parameters found for each algorithm. The average reward per episode and standard
error are computed using 10000 samples (each a lifetime of 75 episodes). The optimization tested
each parameter set for 300 lifetimes, so the best parameters found still occasionally perform poorly.
We found the above parameters to perform poorly (average reward less than −200) approximately
one in 500 lifetimes. These outliers were removed for the average reward calculations. Random
policy parameters average less than −5000 reward per episode.
7 Case Study
In this section we compare the learning speed of VAC, VCN, NAC, and NCN. Our goal is to determine whether VCN and NCN perform similarly to VAC and NAC, which are established methods [6], even though VCN and NCN's modules do not have access to ∂π/∂θ^i. To perform a thorough
analysis, we again use the modular actor depicted in Figure 1, as in Section 5. We therefore require
a problem with a simple optimal policy. We select the gridworld from Section 5, and again use a
tabular critic in order to focus on the difference in policy improvements. To decrease the size of the
parameter space, we did not decay α nor β. For all four algorithms, we performed a grid search
for the α, β, c, τ12, and τ3 that maximize the average reward over 75 episodes, where τ12 is the τ
used by A^1 and A^2, while τ3 is that of A^3. The best parameters are provided in Table 1. Recall
that the increased variance in VCN updates arises because A^1 and A^2's actions only influence the
transitions of M indirectly through the actions of A^3. Though decreased exploration is beneficial in
general, for this particular modular policy it is therefore particularly important that A^3's exploration
be decreased by increasing τ3. The optimization does just this, balancing the trade-off between
exploration and the variance of gradient estimates by selecting larger τ3 for VCN than VAC. The
mean ratio τ3/τ12 for the top 25 of the 202300 parameter sets tested was 5.48 for VAC and 31.04 for
VCN, further emphasizing the relatively smaller exploration of A^3. For NAC and NCN, the exploration parameters are identical, suggesting that the additional variance of gradient estimates was not
significant. This is likely due to the policy gradient estimates being filtered before being used.
The average rewards during a lifetime are similar, suggesting that, even though the variance of
gradient estimates can be orders of magnitude larger for VCN with τ12 = τ3 = 1 (Figure 3(a)), exploration can
be tuned such that learning speed is not significantly diminished.
8 Conclusion
We have devised a class of algorithms, policy gradient coagent networks (PGCNs), and two specific instantiations thereof, the natural coagent network (NCN) and vanilla coagent network (VCN),
which allow modules within an actor to update given only local information. We show that the
NCN ascends the decomposed natural policy gradient, an approximation to the natural policy gradient, while VCN ascends the vanilla policy gradient. We discussed the theoretical properties of
both the decomposed natural policy gradient and the increase in the variance of gradient estimates
when using PGCNs. Lastly, we presented a case study to compare NCN and VCN to two existing
actor-critic methods, NAC and VAC. We showed that, even though NAC and VAC are provided with
additional non-local information, VCN and NCN perform comparably. We showed that VCN's
similar performance is achieved by decreasing exploration in order to decrease the stochasticity of
each module's CoMDP, and thus the variance of the gradient estimates.
Acknowledgements
We would like to thank Scott Kuindersma, Scott Niekum, Bruno Castro da Silva, Andrew Barto,
Sridhar Mahadevan, the members of the Autonomous Learning Laboratory, and the reviewers for
their feedback and contributions to this paper.
References
[1] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[2] S. Amari and S. Douglas. Why natural gradient? In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '98), volume 2, pages 1213–1216, 1998.
[3] A. G. Barto. Learning by statistical cooperation of self-interested neuron-like computing elements. Human Neurobiology, 4:229–256, 1985.
[4] A. G. Barto. Adaptive critics and the basal ganglia. Models of Information Processing in the Basal Ganglia, pages 215–232, 1995.
[5] S. Bhatnagar, R. S. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, 45(11):2471–2482, 2009.
[6] S. Bhatnagar, R. S. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Technical Report TR09-10, University of Alberta Department of Computing Science, June 2009.
[7] A. Claridge-Chang, R. Roorda, E. Vrontou, L. Sjulson, H. Li, J. Hirsh, and G. Miesenbock. Writing memories with light-addressable reinforcement circuitry. Cell, 139(2):405–415, 2009.
[8] F. H. C. Crick. The recent excitement about neural networks. Nature, 337:129–132, 1989.
[9] N. Daw and K. Doya. The computational neurobiology of learning and reward. Current Opinion in Neurobiology, 16:199–204, 2006.
[10] K. Doya. What are the computations of the cerebellum, the basal ganglia and the cerebral cortex? Neural Networks, 12:961–974, 1999.
[11] K. Doya. Reinforcement learning in continuous time and space. Neural Computation, 12(1):219–245, 2000.
[12] M. J. Frank and E. D. Claus. Anatomy of a decision: Striato-orbitofrontal interactions in reinforcement learning, decision making, and reversal. Psychological Review, 113(2):300–326, 2006.
[13] E. Ludvig, R. Sutton, and E. Kehoe. Stimulus representation and the timing of reward-prediction errors in models of the dopamine system. Neural Computation, 20:3034–3035, 2008.
[14] R. C. O'Reilly. The LEABRA model of neural interactions and learning in the neocortex. PhD thesis, Carnegie Mellon University.
[15] J. Peters and S. Schaal. Natural actor critic. Neurocomputing, 71:1180–1190, 2008.
[16] F. Rivest, Y. Bengio, and J. Kalaska. Brain inspired reinforcement learning. In Advances in Neural Information Processing Systems, pages 1129–1136, 2005.
[17] D. E. Rumelhart and J. L. McClelland. Parallel Distributed Processing. Volume 1: Foundations. MIT Press, Cambridge, MA, 1986.
[18] T. W. Sandholm and R. H. Crites. Multiagent reinforcement learning in the iterated prisoner's dilemma. Biosystems, 37:147–166, 1996.
[19] W. Schultz, P. Dayan, and P. Montague. A neural substrate of prediction and reward. Science, 275:1593–1599, 1997.
[20] A. Stocco, C. Lebiere, and J. Anderson. Conditional routing of information to the cortex: A model of the basal ganglia's role in cognitive coordination. Psychological Review, 117(2):541–574, 2010.
[21] R. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.
[22] R. Sutton and A. Barto. Toward a modern theory of adaptive networks: Expectation and prediction. Psychological Review, 88:135–140, 1981.
[23] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[24] R. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, pages 1057–1063, 2000.
[25] P. Thomas and A. Barto. Conjugate Markov decision processes. In Proceedings of the Twenty-Eighth International Conference on Machine Learning, 2011.
[26] R. J. Williams. A class of gradient-estimating algorithms for reinforcement learning in neural networks. In Proceedings of the IEEE First International Conference on Neural Networks, 1987.
[27] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.
[28] D. Zipser and R. A. Andersen. A back propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331:679–684, 1988.
3,810 | 445 | Human and Machine 'Quick Modeling'
Jakob Bernasconi
Asea Brown Boveri Ltd
Corporate Research
CH-5405 Baden,
SWITZERLAND
Karl Gustafson
University of Colorado
Department of Mathematics and
Optoelectronic Computing Center
Boulder, CO 80309
ABSTRACT
We present here an interesting experiment in 'quick modeling' by humans,
performed independently on small samples, in several languages and two
continents, over the last three years. Comparisons to decision tree procedures and neural net processing are given. From these, we conjecture that
human reasoning is better represented by the latter, but substantially different from both. Implications for the 'strong convergence hypothesis' between neural networks and machine learning are discussed, now expanded
to include human reasoning comparisons.
1
INTRODUCTION
Until recently the fields of symbolic and connectionist learning evolved separately.
Suddenly in the last two years a significant number of papers comparing the two
methodologies have appeared. A beginning synthesis of these two fields was forged
at the NIPS '90 Workshop #5 last year (Pratt and Norton, 1990), where one may
find a good bibliography of the recent work of Atlas, Dietterich, Omohundro, Sanger,
Shavlik, Tsoi, Utgoff and others.
It was at that NIPS '90 Workshop that we learned of these studies, most of which
concentrate on performance comparisons of decision tree algorithms (such as ID3,
CART) and neural net algorithms (such as Perceptrons, Backpropagation). Independently three years ago we had looked at Quinlan's ID3 scheme (Quinlan, 1984)
and intuitively and rather instantly not agreeing with the generalization he obtains
by ID3 from a sample of 8 items generalized to 12 items, we subjected this example
to a variety of human experiments. We report our findings, as compared to the
performance of ID3 and also to various neural net computations.
Because our focus on humans was substantially different from most of the other
mentioned studies, we also briefly discuss some important related issues for further investigation. More details are given elsewhere (Bernasconi and Gustafson, to
appear).
2
THE EXPERIMENT
To illustrate his ID3 induction algorithm, Quinlan (1984) considers a set C consisting of 8 objects, with attributes height, hair, and eyes. The objects are described
in terms of their attribute values and classified into two classes, "+" and "-", respectively (see Table 1). The problem is to find a rule which correctly classifies all
objects in C, and which is in some sense minimal.
Table 1: The set C of objects in Quinlan's classification example.

Object | Height    | Hair      | Eyes       | Class
1      | (s) short | (b) blond | (bl) blue  | +
2      | (t) tall  | (b) blond | (br) brown | −
3      | (t) tall  | (r) red   | (bl) blue  | +
4      | (s) short | (d) dark  | (bl) blue  | −
5      | (t) tall  | (d) dark  | (bl) blue  | −
6      | (t) tall  | (b) blond | (bl) blue  | +
7      | (t) tall  | (d) dark  | (br) brown | −
8      | (s) short | (b) blond | (br) brown | −
The ID3 algorithm uses an information-theoretic approach to construct a "minimal"
classification rule, in the form of a decision tree, which correctly classifies all objects
in the learning set C. In Figure 1, we show two possible decision trees which
correctly classify all 8 objects of the set C. Decision tree 1 is the one selected by
the ID3 algorithm. As can be seen, "Hair" as root of the tree classifies four of the
eight objects immediately. Decision tree 2 requires the same number of tests and
has the same number of branches, but "Eyes" as root classifies only three objects
at the first level of the tree.
Consider now how the decision trees of Figure 1 classify the remaining four possible
objects in the set complement C'. Table 2 shows that the two decision trees lead to
a different classification of the four objects of sample C'. We observe that the ID3-preferred decision tree 1 places a large importance on the "red" attribute (which
occurs only in one object of sample C), while decision tree 2 puts much less emphasis
on this particular attribute.
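To make the information-theoretic criterion concrete, a small sketch of the ID3 attribute-selection computation on sample C follows. This is our own illustration, not Quinlan's code; it reproduces "Hair" as the highest-gain root attribute.

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(objects, labels, attribute):
        """ID3-style gain of splitting `objects` (dicts) on `attribute`."""
        gain = entropy(labels)
        for value in set(o[attribute] for o in objects):
            subset = [l for o, l in zip(objects, labels) if o[attribute] == value]
            gain -= (len(subset) / len(labels)) * entropy(subset)
        return gain

    # Quinlan's sample C from Table 1.
    C = [{"height": "s", "hair": "b", "eyes": "bl"}, {"height": "t", "hair": "b", "eyes": "br"},
         {"height": "t", "hair": "r", "eyes": "bl"}, {"height": "s", "hair": "d", "eyes": "bl"},
         {"height": "t", "hair": "d", "eyes": "bl"}, {"height": "t", "hair": "b", "eyes": "bl"},
         {"height": "t", "hair": "d", "eyes": "br"}, {"height": "s", "hair": "b", "eyes": "br"}]
    y = ["+", "-", "+", "-", "-", "+", "-", "-"]
    print(sorted(((information_gain(C, y, a), a) for a in ("height", "hair", "eyes")), reverse=True))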
Figure 1: Two possible decision trees (decision tree 1 and decision tree 2) for the classification of sample C (Table 1)
Table 2: The set C' of the remaining four objects, and their classification by the
decision trees of Figure 1.
Object | Attribute Values | Tree 1 | Tree 2
9      | s d br           | −      | −
10     | s r bl           | +      | +
11     | s r br           | +      | −
12     | t r br           | +      | −

3 GENERALIZATIONS BY HUMANS AND NEURAL NETS
Curious about these differences in the generalization behavior, we have asked some
humans (colleagues, graduate students, undergraduate students, some nonscientists
also) to "look" at the original sample C of 8 items, presented to them without
warning, and to "use" this information to classify the remaining 4 objects. Over
some time, we have accumulated a "human sample" of total size 73 from 3 continents
representing 14 languages. The results of this human generalization experiment are
summarized in Table 3. We observe that about 2/3 of the test persons generalized
in the same manner as decision tree 2, and that less than 10 percent arrived at the
generalization corresponding to the ID3-preferred decision tree 1.
Table 3: Classification of objects 9 through 12 by Humans and by a Neural Net.
Based on a total sample of 73 humans. Each of the 4 contributing subsamples from
different languages and locations gave consistent percentages.
Object | Attribute Values |   A   |   B   |  C   |  D   | Other
9      | s d br           |   −   |   −   |      |      |
10     | s r bl           |   +   |   +   |      |      |
11     | s r br           |   −   |   +   |      |      |
12     | t r br           |   −   |   +   |      |      |
Humans:                   | 65.8% |  8.2% | 4.1% | 9.6% | 12.3%
Neural Net:               | 71.4% | 12.1% | 9.4% | 4.2% |  2.9%
We also subjected this generalization problem to a variety of neural net computations. In particular, we analyzed a simple perceptron architecture with seven input
units representing a unary coding of the attribute values (i.e., a separate input unit
for each attribute value). The eight objects of sample C (Table 1) were used as
training examples, and we employed the perceptron learning procedure (Rumelhart
and McClelland, 1986) for a threshold output unit . In our initial experiment, the
starting weights were chosen randomly in (-1,1) and the learning parameter h (the
magnitude of the weight changes) was varied between 0.1 and 1. After training,
the net was asked to classify the unseen objects 9 to 12 of Table 2. Out of the 16
possible classifications of this four object test set, only 5 were realized by the neural
net (labelled A through E in Table 3). The percentage values given in Table 3
refer to a total of 9000 runs (3000 each for h = 0.1, 0.5, and 1.0, respectively). As
can be seen, there is a remarkable correspondence between the solution profile of
the neural net computations and that of the human experiment.
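The following is a minimal sketch of this experiment; it is our own reconstruction for illustration, and the random seed and loop count are arbitrary choices.

    import numpy as np

    # Unary (one unit per attribute value) coding of the 8 objects of sample C:
    # [short, tall | blond, red, dark | blue, brown] -> 7 input units.
    X = np.array([[1,0, 1,0,0, 1,0], [0,1, 1,0,0, 0,1], [0,1, 0,1,0, 1,0],
                  [1,0, 0,0,1, 1,0], [0,1, 0,0,1, 1,0], [0,1, 1,0,0, 1,0],
                  [0,1, 0,0,1, 0,1], [1,0, 1,0,0, 0,1]], dtype=float)
    y = np.array([1, -1, 1, -1, -1, 1, -1, -1])   # classes of objects 1..8

    rng = np.random.default_rng(0)
    w, b, h = rng.uniform(-1, 1, 7), rng.uniform(-1, 1), 0.5   # h: learning parameter
    for _ in range(100):                      # perceptron learning procedure
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:        # threshold unit misclassifies
                w += h * yi * xi
                b += h * yi

    X_test = np.array([[1,0, 0,0,1, 0,1], [1,0, 0,1,0, 1,0],
                       [1,0, 0,1,0, 0,1], [0,1, 0,1,0, 0,1]], dtype=float)
    print(np.sign(X_test @ w + b))            # generalization to objects 9..12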
4 BACKWARD PREDICTION
There exist many different rules which all correctly classify the given set C of 8
objects (Table 1), but which lead to a different generalization behavior, i.e., to a
different classification of the remaining objects 9 to 12 (see Tables 2 and 3). From
a formal point of view, all of the 16 possible classifications of objects 9 to 12 are
equally probable, so that no a priori criterion seems to exist to prefer one generalization over the other. We have nevertheless attempted to quantify the obviously
ill-defined notion of "meaningful generalization". To estimate the relative "quality"
of different classification rules, we propose to analyze the "backward prediction ability" of the respective generalizations. This is evaluated as follows. An appropriate
learning method (e.g., neural nets) is used to construct rules which explain a given
classification of objects 9 to 12, and these rules are applied to classify the initial
set C of 8 objects. The 16 possible generalizations can then be rated according to
their "backward prediction accuracy" with respect to the original classification of
Human and Machine 'Quick Modeling'
the sample C. We have performed a number of such calculations and consistently
found that the 5 generalizations chosen by the neural nets in the forward prediction
mode (cf. Table 3) have by far the highest backward prediction accuracy (on the
average between 5 and 6 correct classifications). Their negations ("+" exchanged
with "-") , on the other hand, predict only about 2 to 3 of the 8 original classifications correctly, while the remaining 6 possible generalizations all have a backward
prediction accuracy close to 50% (4 out of 8 correct) . These results, representing
averages over 1000 runs, are given in Table 4.
Table 4: Neural Net backward prediction accuracy for the different classifications
of objects 9 to 12 (one row per possible classification of objects 9 to 12, ordered by accuracy).

Backward prediction accuracy (%):
76.0, 71.2, 71.1, 67.9, 61.9, 52.6, 52.5, 52.5, 47.4, 47.3, 47.0, 37.2, 31.7, 30.1, 28.3, 23.6
In addition to Neural Nets, we have also used the ID3 method to evaluate the backward predictive power of different generalizations. This method generates fewer
rules than the Neural Nets (often only a single one), but the resulting tables of
backward prediction accuracies all exhibit the same qualitative features. As examples, we show in Figure 2 the ID3 backward prediction trees for two different
generalizations, the ID3-preferred generalization which classifies the objects 9 to 12
as (− + + +), and the Human and Neural Net generalization (− + − −). Both trees
have a backward prediction accuracy of 75% (provided that "blond hair" in tree (a)
is classified randomly).
Figure 2: ID3 backward prediction trees, (a) for the ID3-preferred generalization
(− + + +), and (b) for the generalization preferred by Humans and Neural Nets,
(− + − −)
The overall backward prediction accuracy is not the only quantity of interest in these
calculations. We can, for example, examine how well the original classification of an
individual object in the set C is reproduced by predicting backwards from a given
generalization.
Some examples of such backward prediction profiles are shown in Figure 3. From
both the ID3 and the Neural Net calculations, it is evident that the backward
prediction behavior of the Human and Neural Net generalization is much more
informative than that of the ID3-solution, even though the two solutions have almost
the same average backward prediction accuracy.
Figure 3: Individual backward prediction probabilities for the ID3-preferred generalization [graphs (a)], and for the Human and Neural Net generalization [graphs (b)].
Finally, we have recently performed a Human backward prediction experiment.
These results are given in Table 5. Details will be given elsewhere (Bernasconi and
Gustafson, to appear). Note that the Backward Prediction results are commensurate with the Forward Prediction in both cases.
Table 5: Human backward predictions and accuracy from the two principal forward
generalizations A (Neural Nets, Humans) and B (ID3).
Object | Class | Backward from A | Backward from B
1      | +
2      | −
3      | +
4      | −
5      | −
6      | +
7      | −
8      | −
Humans:    59%   12%   33%   17%
Accuracy:  75%  100%   75%   75%
5 DISCUSSION AND CONCLUSIONS
Our basic conclusion from this experiment is that the "Strong Convergence Hypothesis" that Machine Learning and Neural Network algorithms are "close" can
be sharpened, with the two fields then better distinguished, by comparison to Human Modelling. From the experiment described here, we conjecture a "Stronger
Convergence Hypothesis" that Humans and Neural Nets are "closer."
Further conclusions related to minimal network size (re Pavel, Gluck, Henkle, 1989),
crossvalidation (see Weiss and Kulikowski, 1991), sharing over nodes (as in Dietterich, Hild, Bakiri, to appear, and Atlas et al., 1990), and rule extracting (Shavlik
et al., to appear), will appear elsewhere (Bernasconi and Gustafson, to appear). Although we have other experiments on other test sets underway, it should be stressed
that our investigations especially toward Human comparisons are only preliminary
and should be viewed as a stimulus to further investigations.
ACKNOWLEDGEMENT
This work was partially supported by the NFP 23 program of the Swiss National
Science Foundation and by the US-NSF grant CDR8622236.
REFERENCES
L. Y. Pratt and S. W. Norton, "Neural Networks and Decision Tree Induction: Exploring the Relationship Between Two Research Areas," NIPS '90 Workshop #5 Summary (1990), 7 pp.
J. Ross Quinlan, "Learning Efficient Classification Procedures and Their Application to Chess End Games," in Machine Learning: An Artificial Intelligence Approach, edited by R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, Springer-Verlag, Berlin (1984), 463–482.
D. E. Rumelhart and J. L. McClelland (Eds.), Parallel Distributed Processing, Vol. 1, MIT Press, Cambridge, MA (1986).
J. Bernasconi and K. Gustafson, "Inductive Inference and Neural Nets," to appear.
J. Bernasconi and K. Gustafson, "Generalization by Humans, Neural Nets, and ID3," IJCNN-91-Seattle.
Y. H. Pao, Adaptive Pattern Recognition and Neural Networks, Addison-Wesley (1989), Chapter 4.
M. Pavel, M. A. Gluck and V. Henkle, "Constraints on Adaptive Networks for Modelling Human Generalization," in Advances in Neural Information Processing Systems 1, edited by D. Touretzky, Morgan Kaufmann, San Mateo, CA (1989), 2–10.
S. Weiss and C. Kulikowski, Computer Systems that Learn, Morgan Kaufmann (1991).
T. G. Dietterich, H. Hild, and G. Bakiri, "A Comparison of ID3 and Backpropagation for English Text-to-Speech Mapping," Machine Learning, to appear.
L. Atlas, R. Cole, J. Connor, M. El-Sharkawi, R. Marks, Y. Muthusamy, E. Barnard, "Performance Comparisons Between Backpropagation Networks and Classification Trees on Three Real-World Applications," in Advances in Neural Information Processing Systems 2, edited by D. Touretzky, Morgan Kaufmann (1990), 622–629.
J. Shavlik, R. Mooney, G. Towell, "Symbolic and Neural Learning Algorithms: An Experimental Comparison (revised)," Machine Learning (1991, to appear).
3,811 | 4,450 | Multiclass Boosting: Theory and Algorithms
Mohammad J. Saberian
Statistical Visual Computing Laboratory,
University of California, San Diego
[email protected]
Nuno Vasconcelos
Statistical Visual Computing Laboratory,
University of California, San Diego
[email protected]
Abstract
The problem of multi-class boosting is considered. A new framework, based on
multi-dimensional codewords and predictors is introduced. The optimal set of
codewords is derived, and a margin enforcing loss proposed. The resulting risk is
minimized by gradient descent on a multidimensional functional space. Two algorithms are proposed: 1) CD-MCBoost, based on coordinate descent, updates one
predictor component at a time, 2) GD-MCBoost, based on gradient descent, updates all components jointly. The algorithms differ in the weak learners that they
support but are both shown to be 1) Bayes consistent, 2) margin enforcing, and
3) convergent to the global minimum of the risk. They also reduce to AdaBoost
when there are only two classes. Experiments show that both methods outperform
previous multiclass boosting approaches on a number of datasets.
1
Introduction
Boosting is a popular approach to classifier design in machine learning. It is a simple and effective
procedure to combine many weak learners into a strong classifier. However, most existing boosting
methods were designed primarily for binary classification. In many cases, the extension to M ary problems (of M > 2) is not straightforward. Nevertheless, the design of multi-class boosting
algorithms has been investigated since the introduction of AdaBoost in [8].
Two main approaches have been attempted. The first is to reduce the multiclass problem to a collection of binary sub-problems. Methods in this class include the popular "one vs all" approach, or
methods such as "all pairs", ECOC [4, 1], AdaBoost-M2 [7], AdaBoost-MR [18] and AdaBoost-MH [18, 9]. The binary reduction can have various problems, including 1) increased complexity, 2)
lack of guarantees of an optimal joint predictor, 3) reliance on data representations, such as adding
one extra dimension that includes class numbers to each data point [18, 9], that may not necessarily
enable effective binary discrimination, or 4) using binary boosting scores that do not represent true
class probabilities [15]. The second approach is to boost an M -ary classifier directly, using multiclass weak learners, such as trees. Methods of this type include AdaBoost-M1[7], SAMME[12] and
AdaBoost-Cost [16]. These methods require strong weak learners which substantially increase complexity and have high potential for overfitting. This is particularly problematic because, although
there is a unified view of these methods under the game theory interpretation of boosting [16], none
of them has been shown to maximize the multiclass margin. Overall, the problem of optimal and
efficient M -ary boosting is still not as well understood as its binary counterpart.
In this work, we introduce a new formulation of multi-class boosting, based on 1) an alternative
definition of the margin for M -ary problems, 2) a new loss function, 3) an optimal set of codewords,
and 4) the statistical view of boosting, which leads to a convex optimization problem in a multidimensional functional space. We propose two algorithms to solve this optimization: CD-MCBoost,
which is a functional coordinate descent procedure, and GD-MCBoost, which implements functional
gradient descent. The two algorithms differ in terms of the strategy used to update the multidimensional predictor. CD-MCBoost supports any type of weak learners, updating one component of
the predictor per boosting iteration, GD-MCBoost requires multiclass weak learners but updates all
1
components simultaneously. Both methods directly optimize the predictor of the multiclass problem
and are shown to be 1) Bayes consistent, 2) margin enforcing, and 3) convergent to the global minimum of the classification risk. They also reduce to AdaBoost for binary problems. Experiments
show that they outperform comparable prior methods on a number of datasets.
2
Multiclass boosting
We start by reviewing the fundamental ideas behind the classical use of boosting for the design of
binary classifiers, and then extend these ideas to the multiclass setting.
2.1
Binary classification
A binary classifier, F(x), is a mapping from examples x ∈ X to class labels y ∈ {−1, 1}. The
optimal classifier, in the minimum probability of error sense, is Bayes decision rule

F(x) = arg max_{y∈{−1,1}} P_{Y|X}(y|x).   (1)
This can be hard to implement, due to the difficulty of estimating the probabilities PY |X (y|x). This
difficulty is avoided by large margin methods, such as boosting, which implement the classifier as
F(x) = sign[f^*(x)]   (2)

where f^*(x): X → R is the continuous valued predictor

f^*(x) = arg min_f R(f)   (3)

that minimizes the classification risk
associated with a loss function L[., .]. In practice, the optimal predictor is learned from a sample
D = {(x_i, y_i)}_{i=1}^n of training examples, and (4) is approximated by the empirical risk
R(f) ≈ Σ_{i=1}^n L[y_i, f(x_i)].   (5)
The loss L[., .] is said to be Bayes consistent if (1) and (2) are equivalent. For large margin methods,
such as boosting, the loss is also a function of the classification margin yf (x), i.e.
L[y, f(x)] = φ(yf(x))   (6)

for some non-negative function φ(·). This dependence on the margin yf(x) guarantees that the
classifier has good generalization when the training sample is small [19]. Boosting learns the
optimal predictor f ? (x) : X ? R as the solution of
min_{f(x)} R(f)   (7)
s.t. f(x) ∈ span(H),
where H = {h_1(x), ..., h_p(x)} is a set of weak learners h_i(x): X → R, and the optimization is
carried out by gradient descent in the functional space span(H) of linear combinations of h_i(x) [14].
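As a small illustration of this setup, the empirical risk of (5) under a margin loss of the form (6) can be written as follows; this is a sketch with an exponential φ as a placeholder, and the names are our own.

    import numpy as np

    def empirical_risk(f, X, y, phi=lambda v: np.exp(-v)):
        """Empirical risk (5) for a margin loss L[y, f(x)] = phi(y f(x)) as in (6).

        With phi(v) = exp(-v) this is the classical exponential loss; phi is a
        placeholder for any non-negative margin-enforcing function.
        """
        margins = y * np.array([f(x) for x in X])
        return np.sum(phi(margins))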
2.2
Multiclass setting
To extend the above formulation to the multiclass setting, we note that the definition of the classification labels as ±1 plays a significant role in the formulation of the binary case. One of the difficulties
of the multiclass extension is that these labels do not have an immediate extension to the multiclass
setting. To address this problem, we return to the classical setting, where the class labels of a M -ary
problem take values in the set {1, . . . , M }. Each class k is then mapped into a distinct class label
y k , which can be thought of as a codeword that identifies the class.
In the binary case, these codewords are defined as y^1 = 1 and y^2 = −1. It is possible to derive
an alternative form for the expressions of the margin and classifier F (x) that depends explicitly on
codewords. For this, we note that (2) can be written as
F(x) = arg max_k y^k f^*(x)   (8)
and the margin can be expressed as
yf = { f, if k = 1; −f, if k = 2 }
   = { (1/2)(y^1 f − y^2 f), if k = 1; (1/2)(y^2 f − y^1 f), if k = 2 }
   = (1/2)(y^k f − max_{l≠k} y^l f).   (9)
The interesting property of these forms is that they are directly extensible to the M -ary classification
case. For this, we assume that the codewords y^k and the predictor f(x) are multi-dimensional, i.e.
y^k, f(x) ∈ R^d for some dimension d which we will discuss in greater detail in the following section.
The margin of f (x) with respect to class k is then defined as
M(f(x), y^k) = (1/2)[<f(x), y^k> − max_{l≠k} <f(x), y^l>]   (10)
and the classifier as
F(x) = arg max_k <f(x), y^k>,   (11)

where <·, ·> is the standard dot-product. Note that this is equivalent to

F(x) = arg max_{k∈{1,...,M}} M(f(x), y^k),   (12)
and thus F (x) is the class of largest margin for the predictor f (x). This definition is closely related to
previous notions of multiclass margin. For example, it generalizes that of [11], where the codewords
y k are restricted to the binary vectors in the canonical basis of Rd , and is a special case of that in
[1], where the dot products < f (x), y k > are replaced by a generic function of f, x, and k. Given a
training sample D = {(x_i, y_i)}_{i=1}^n, the optimal predictor f^*(x) minimizes the risk

R_M(f) = E_{X,Y}{L_M[y, f(x)]} ≈ Σ_{i=1}^n L_M[y_i, f(x_i)],   (13)
where LM [., .] is a multiclass loss function. A natural extension of (6) and (9) is a loss of the form
L_M[y, f(x)] = φ(M(f(x), y)).   (14)
To avoid the nonlinearity of the max operator in (10), we rely on
L_M[y, f(x)] = Σ_{k=1}^M e^{−(1/2)[<f(x), y> − <f(x), y^k>]},   (15)
which is shown, in Appendix A, to upper bound 1 + e^{−M(f(x),y)}. It follows that the minimization of
the risk of (13) encourages predictors of large margin M(f^*(x_i), y_i), ∀i. For M = 2, L_M[y, f(x)]
reduces to

L_2[y, f(x)] = 1 + e^{−yf(x)},   (16)
and the risk minimization problem is identical to that of AdaBoost [8]. In appendices B and C it
is shown that RM (f ) is convex and Bayes consistent, in the sense that if f ? (x) is the minimizer of
(13), then
<f^*(x), y^k> = log P_{Y|X}(y^k|x) + c, ∀k,   (17)
and (11) implements the Bayes decision rule

F(x) = arg max_k P_{Y|X}(y^k|x).   (18)

2.3 Optimal set of codewords
From (15), the choice of codewords y^k has an impact on the optimal predictor f^*(x), which is
determined by the projections <f^*(x), y^k>. To maximize the margins of (10), the difference
between these projections should be as large as possible. To accomplish this we search for the set of
M distinct unit codewords Y = {y 1 , . . . , y M } ? Rd that are as dissimilar as possible
?
maxd,y1 ,...yM [mini=j ||y i ? y j ||2 ]
?
?
(19)
k
|| = 1 ?k = 1..M.
?
? s.t ||y
k
d
y ? R ?k = 1..M.
3
1.5
1.5
1
1
05
0.5
0.5
0
0
-0.5
-0.5
-1
-1
-1.5
-1.5
-1
-0.5
0
0.5
1
1.5
(M = 2)
-1.5
-1.5
1
0
-1
1
-1
-0.5
0
0.5
1
1.5
(M = 3)
0
-1
-1
0
1
(M = 4)
Figure 1: Optimal codewords for M = 2, 3, 4.
To solve this problem, we start by noting that, for d < M, the smallest distance of (19) can be
increased by simply increasing d, since this leads to a larger space. On the other hand, since M
points y^1, ..., y^M lie in an, at most, M − 1 dimensional subspace of R^d, e.g. any three points belong
to a plane, there is no benefit in increasing d beyond M − 1. On the contrary, as shown in Appendix
D, if d > M − 1 there exists a vector v ∈ R^d with equal projection on all codewords,

<y^i, v> = <y^j, v>, ∀i, j = 1, ..., M.   (20)
Since the addition of v to the predictor f(x) does not change the classification rule of (11), this makes
the optimal predictor underdetermined. To avoid this problem, we set d = M − 1. In this case, as
shown in Appendix E, the vertices of an M − 1 dimensional regular simplex¹ centered at the origin [3]
are solutions of (19). Figure 1 presents the set of optimal codewords for M = 2, 3, 4. Note that
in the binary case this set consists of the traditional codewords y_i ∈ {+1, −1}. In general, there is
no closed form solution for the vertices of a regular simplex of M vectors. However, these can be
derived from those of a regular simplex of M − 1 vectors, and a recursive solution is possible [3].
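A sketch of one such construction, together with the decision rule of (11), is given below; the routine is our own illustration of the standard simplex geometry, not the authors' released code [2].

    import numpy as np

    def simplex_codewords(M):
        """Vertices of a regular (M-1)-simplex, centered at the origin,
        returned as M unit codewords in R^{M-1}."""
        Y = np.eye(M) - 1.0 / M                    # centered canonical basis
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)
        # Project the M points onto their (M-1)-dimensional span.
        _, _, Vt = np.linalg.svd(Y, full_matrices=False)
        return Y @ Vt[: M - 1].T                   # shape (M, M-1)

    def classify(f_x, Y):
        """Decision rule (11): pick the codeword of largest dot-product."""
        return int(np.argmax(Y @ f_x))

    Y = simplex_codewords(3)                       # M = 3 -> 2-d codewords
    print(np.round(Y @ Y.T, 3))                    # equal pairwise dot-products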
3
Risk minimization
We have so far defined a proper margin loss function for M -ary classification and identified an
optimal codebook. In this section, we derive two boosting algorithms for the minimization of the
classification risk of (13). These algorithms are both based on the GradientBoost framework [14].
The first is a functional coordinate descent algorithm, which updates a single component of the
predictor per boosting iteration. The second is a functional gradient descent algorithm that updates
all components simultaneously.
3.1
Coordinate descent
In the first method, each component f_i^*(x) of the optimal predictor f^*(x) = [f_1^*(x), ..., f^*_{M−1}(x)] is
the linear combination of weak learners that solves the optimization problem

min_{f_1(x),...,f_{M−1}(x)} R([f_1(x), ..., f_{M−1}(x)])   (21)
s.t. f_j(x) ∈ span(H), ∀j = 1..M − 1,
where H = {h_1(x), ..., h_p(x)} is a set of weak learners, h_i(x): X → R. These can be
stumps, regression trees, or members of any other suitable model family. We denote by f^t(x) =
[f_1^t(x), ..., f^t_{M−1}(x)] the predictor available after t boosting iterations. At iteration t + 1 a single
component f_j(x) of f(x) is updated with a step in the direction of the scalar functional g that most
decreases the risk R[f_1^t, ..., f_j^t + α_j^* g, ..., f^t_{M−1}]. For this, we consider the functional derivative of
R[f(x)] along the direction of the functional g: X → R, at point f(x) = f^t(x), with respect to the
j-th component f_j(x) of f(x) [10],
δR[f^t; j, g] = ∂R[f^t + ε g 1_j]/∂ε |_{ε=0},   (22)

¹A regular M − 1 dimensional simplex is the convex hull of M normal vectors which have equal pair-wise distances.

where 1_j ∈ R^d is a vector whose j-th element is one and the remainder zero, i.e. f^t + ε g 1_j =
[f_1^t, ..., f_j^t + ε g, ..., f^t_{M−1}]. Using the risk of (13), it is shown in Appendix F that
−δR[f^t; j, g] = Σ_{i=1}^n g(x_i) w_i^j,   (23)

with

w_i^j = (1/2) e^{−(1/2)<f^t(x_i), y_i>} Σ_{k=1}^M <1_j, y_i − y^k> e^{(1/2)<f^t(x_i), y^k>}.   (24)
The direction of greatest risk decrease is the weak learner

g_j^*(x) = arg max_{g∈H} Σ_{i=1}^n g(x_i) w_i^j,   (25)

and the optimal step size along this direction

α_j^* = arg min_{α∈R} R[f^t(x) + α g_j^*(x) 1_j].   (26)

The classifier is thus updated as

f^{t+1} = f^t(x) + α_j^* g_j^*(x) 1_j = [f_1^t, ..., f_j^t + α_j^* g_j^*, ..., f^t_{M−1}].   (27)

This procedure is summarized in Algorithm 1-left and denoted CD-MCBoost. It starts with f^0(x) = 0 ∈ R^{M−1} and updates the predictor components sequentially. Note that, since (13) is a convex function of f(x), it converges to the global minimum of the risk.
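To illustrate how (24) translates into computation, a vectorized sketch of the weights w_i^j follows; this is our own, assuming the current predictions, codewords and labels are stored as numpy arrays, and is not the released implementation [2].

    import numpy as np

    def cd_weights(F, Y_codes, y_idx, j):
        """Weights w_i^j of (24) for component j.

        F       : n x (M-1) array of current predictions f^t(x_i)
        Y_codes : M x (M-1) array of codewords y^k
        y_idx   : class index of each example
        """
        P = F @ Y_codes.T                                 # <f(x_i), y^k>
        own = P[np.arange(len(F)), y_idx]                 # <f(x_i), y_i>
        diff = Y_codes[y_idx][:, j, None] - Y_codes[None, :, j]   # <1_j, y_i - y^k>
        return 0.5 * np.exp(-0.5 * own) * np.sum(diff * np.exp(0.5 * P), axis=1)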
3.2
Gradient descent
Alternatively, (13) can be minimized by learning a linear combination of multiclass weak learners.
In this case, the optimization problem is
min_{f(x)} R[f(x)]   (28)
s.t. f(x) ∈ span(H),
where H = {h_1(x), ..., h_p(x)} is a set of multiclass weak learners, h_i(x): X → R^{M−1}, such as
decision trees. Note that to fit tree classifiers in this definition their output (usually a class number)
should be translated into a class codeword. As before, let f^t(x) ∈ R^{M−1} be the predictor available
after t boosting iterations. At iteration t + 1 a step is given along the direction g(x) ∈ H of largest
decrease of the risk R[f(x)]. For this, we consider the directional functional derivative of R[f(x)]
along the direction of the functional g: X → R^{M−1}, at point f(x) = f^t(x),
δR[f^t; g] = ∂R[f^t + ε g]/∂ε |_{ε=0}.   (29)
As shown in Appendix G,
−δR[f^t; g] = Σ_{i=1}^n <g(x_i), w_i>,   (30)

where w_i ∈ R^{M−1},

w_i = (1/2) e^{−(1/2)<f^t(x_i), y_i>} Σ_{k=1}^M (y_i − y^k) e^{(1/2)<f^t(x_i), y^k>}.   (31)
The direction of greatest risk decrease is the weak learner

g^*(x) = arg max_{g∈H} Σ_{i=1}^n <g(x_i), w_i>,   (32)

and the optimal step size along this direction

α^* = arg min_{α∈R} R[f^t(x) + α g^*(x)].   (33)
The predictor is updated to f^{t+1}(x) = f^t(x) + α^* g^*(x). This procedure is summarized in Algorithm 1-right, and denoted GD-MCBoost. Since (13) is convex, it converges to the global minimum of the risk.
Algorithm 1 CD-MCBoost and GD-MCBoost
Input: Number of classes M, set of codewords Y = {y^1, ..., y^M}, number of iterations N and
dataset S = {(x_1, y_1), ..., (x_n, y_n)}, where x_i are examples and y_i ∈ Y are their class codewords.
Initialization: set t = 0, and f^t = 0 ∈ R^{M−1}

CD-MCBoost:
while t < N do
  for j = 1 to M − 1 do
    Compute w_i^j with (24)
    Find g_j^*(x), α_j^* using (25) and (26)
    Update f_j^{t+1}(x) = f_j^t(x) + α_j^* g_j^*(x)
    Update f_k^{t+1}(x) = f_k^t(x), ∀k ≠ j
    t = t + 1
  end for
end while

GD-MCBoost:
while t < N do
  Compute w_i with (31)
  Find g^*(x), α^* using (32) and (33)
  Update f^{t+1}(x) = f^t(x) + α^* g^*(x)
  t = t + 1
end while

Output: decision rule: F(x) = arg max_k <f^N(x), y^k>
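A schematic driver for the GD-MCBoost column of Algorithm 1 is sketched below, reusing gd_weights from the sketch above. The weak_fit interface is hypothetical, standing in for maximizing (32) and line-searching (33); this is not the released implementation [2].

    import numpy as np

    def gd_mcboost(X, y_idx, Y_codes, weak_fit, N):
        """GD-MCBoost loop of Algorithm 1 (right column), schematically.

        weak_fit(X, W) is assumed to return a multiclass weak learner g
        (mapping X to n x (M-1) outputs) and its step size alpha.
        """
        n, d = X.shape[0], Y_codes.shape[1]
        F = np.zeros((n, d))                       # f^0 = 0 in R^{M-1}
        learners = []
        for _ in range(N):
            W = gd_weights(F, Y_codes, y_idx)      # weights of (31)
            g, alpha = weak_fit(X, W)
            F += alpha * g(X)                      # f^{t+1} = f^t + alpha g
            learners.append((alpha, g))
        return learners

    def predict(learners, X, Y_codes):
        F = sum(a * g(X) for a, g in learners)
        return np.argmax(F @ Y_codes.T, axis=1)    # F(x) = argmax_k <f(x), y^k>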
4
Comparison to previous methods
Multi-dimensional predictors and codewords have been used implicitly, [7, 18, 16, 6], or explicitly,
[12, 9], in all previous multiclass boosting methods.
"one vs all", "all pairs" and "ECOC" [1]: as shown in [1], these methods can be interpreted
as assigning a codeword y^k ∈ {+1, 0, −1}^l to each class, where l = M for "one vs all",
l = M(M−1)/2 for "all pairs", and l is variable for "ECOC", depending on the error correction code. In
all these methods, binary classifiers are learned independently for each of the codeword components.
This does not guarantee an optimal joint predictor. These methods are similar to CD-MCBoost in the
sense that the predictor components are updated individually at each boosting iteration. However,
in CD-MCBoost, the codewords are not restricted to {+1, 0, −1} and the predictor components are
learned jointly.
AdaBoost-MH [18, 9]: This method converts the M -ary classification problem into a binary one,
learned from a M times larger training set, where each example x is augmented with a feature y that
identifies a class. Examples such that x belongs to class y receive binary label 1, while the remaining
receive the label ?1 [9]. In this way, the binary classifier learns if the multiclass label y is correct
for x or not. AdaBoost-MH uses weak learners ht : X ? {1, . . . , M } ? R and the decision rule
F? (x) = arg max
ht (x, j)
(34)
j?{1,2,..M }
t
where t is the iteration number. This is equivalent to
the decision rule of (11) if f (x) is an M dimensional predictor with j th component fj (x) = t ht (x, j), and the label codewords are defined as y j = 1j . This method is comparable to CD-MCBoost in the sense that it does not require
multiclass weak learners. However, there are no guarantees that the weak learners in common use
are able to discriminate the complex classes of the augmented binary problem.
AdaBoost-M1 [7] and AdaBoost-Cost [16]: These methods use multiclass weak learners ht :
X ? {1, 2, ..M } and a classification rule of the form
F̂(x) = arg max_{j∈{1,2,...,M}} Σ_{t | h_t(x)=j} α_t,   (35)

where t is the boosting iteration and α_t the coefficient of weak learner h_t(x). This is equivalent
to the decision rule of (11) if f(x) is an M-dimensional predictor with j-th component f_j(x) =
Σ_{t|h_t(x)=j} α_t and label codewords y^j = 1_j. These methods are comparable to GD-MCBoost,
in the sense that they update the predictor components simultaneously. However, they have not been
shown to be Bayes consistent, and it is not clear that they can be interpreted as maximizing the
multiclass margin.
Figure 2: Classifier predictions of CD-MCBoost, on the test set, after t = 0, 10, 100 boosting iterations.
SAMME [12]: This method explicitly uses M -dimensional predictors with codewords
y^j = (M 1_j − 1)/(M − 1) = (−1/(M−1), ..., −1/(M−1), 1, −1/(M−1), ..., −1/(M−1)) ∈ R^M,   (36)

and decision rule

F̂(x) = arg max_{j∈{1,2,...,M}} f_j(x).   (37)
Since, as discussed in Section 2.3, the optimal detector is not unique when the predictor is M-dimensional, this algorithm includes the additional constraint Σ_{j=1}^M f_j(x) = 0 and solves a constrained optimization problem [12, 9]. It is comparable to GD-MCBoost in the sense that it updates the predictor components simultaneously, but uses the loss function L_SAMME[y^k, f(x)] = e^{−(1/M)<y^k, f(x)>}. Using (36), the minimization of this loss is equivalent to maximizing

M̃(f(x), y^k) = <f(x), y^k> = f_k(x) − (1/(M−1)) Σ_{j≠k} f_j(x),   (38)

which is not a proper margin, since M̃(f(x), y^k) > 0 does not imply correct classification, i.e.
f_k(x) > f_j(x), ∀j ≠ k. Hence, SAMME does not guarantee a large margin solution for the
multiclass problem.
When compared to all these methods, MCBoost has the advantage of combining 1) a Bayes consistent and margin enforcing loss function, 2) an optimal set of codewords, 3) the ability to boost any
type of weak learner, 4) guaranteed convergence to the global minimum of (21), for CD-MCBoost, or
(28), for GD-MCBoost, and 5) equivalence to the classical AdaBoost algorithm for binary problems.
It is worth emphasizing that MCBoost can boost any type of weak learners of non-zero directional
derivative, i.e. non-zero (23) for CD-MCBoost and (30) for GD-MCBoost. This is independent
of the type of weak learner output, and unlike previous multiclass boosting approaches, which can
only boost weak learners of specific output types. Note that, although the weak learner selection
criteria of previous approaches can have interesting interpretations, e.g. based on weighted error
rates [16], these only hold for specific weak learners. Finally, MCBoost extends the definition of
margin and loss function to multi-dimensional predictors. The derivation of Section 2 can easily be
generalized to the design of other multiclass boosting algorithms by the use of 1) alternative ?(v)
functions in (14) (e.g. those of the logistic [9] or Tangent [13] losses for increased outlier robustness,
asymmetric losses for cost-sensitive classification, etc.), and 2) alternative optimization approaches
(e.g. Newton's method [9, 17]).
5 Evaluation
A number of experiments were conducted to evaluate the MCBoost algorithms (code for CD-MCBoost and GD-MCBoost is available from [2]).
5.1 Synthetic data
We start with a synthetic example, for which the optimal decision rule is known. This is a three class
problem, with two-dimensional Gaussian classes of means [1, 2], [-1, 0], [2, -1] and covariances [1, 0.5; 0.5, 2], [1, 0.3; 0.3, 1], [0.4, 0.1; 0.1, 0.8], respectively.
Table 1: Accuracy of multiclass boosting methods, using decision stumps, on six UCI data sets
method           | landsat | letter | pendigit | optdigit | shuttle | isolet
One Vs All       | 84.80%  | 50.92% | 86.56%   | 89.93%   | 87.11%  | 88.97%
AdaBoost-MH [18] | 47.70%  | 15.73% | 24.41%   | 73.62%   | 79.16%  | 66.71%
CD-MCBoost       | 85.70%  | 49.60% | 89.51%   | 92.82%   | 88.01%  | 91.02%
Table 2: Accuracy of multiclass boosting methods, using trees of max depth 2, on six UCI data sets
method              | landsat | letter | pendigit | optdigit | shuttle | isolet
AdaBoost-M1 [7]     | 72.85%  | n/a    | n/a      | n/a      | 96.45%  | n/a
AdaBoost-SAMME [12] | 79.80%  | 45.65% | 83.82%   | 87.53%   | 99.70%  | 61.00%
AdaBoost-Cost [16]  | 83.95%  | 42.00% | 80.53%   | 86.20%   | 99.55%  | 63.69%
GD-MCBoost          | 86.65%  | 59.65% | 92.94%   | 92.32%   | 99.73%  | 84.28%
(n/a: AdaBoost-M1 was unable to boost the weak learners on these datasets; see the discussion below.)
Training and test sets of 1,000 examples
each were randomly sampled and the Bayes rule computed in closed form [5]. The associated Bayes
error rate was 11.67% in the training and 11.13% in the test set. A classifier was learned with
CD-MCBoost and decision stumps.
Figure 2 shows predictions³ of $f^t(x)$ on the test set, for t = 0, 10, 100. Note that $f^0(x_i) = [0, 0]$ for all examples $x_i$. However, as the iterations proceed, CD-MCBoost produces predictions that are
more aligned with the true class codewords, shown as dashed lines, while maximizing the distance
between examples of different classes (by increasing their distance to the origin). In this context,
"alignment of $f(x)$ with $y^k$" implies that $\langle f(x), y^k \rangle \geq \langle f(x), y^j \rangle, \forall j \neq k$. This combination
of alignment and distance maximization results in higher margins, leading to more accurate and
robust classification. The test error rate after 100 iterations of boosting was 11.30%, and very close
to the Bayes error rate of 11.13%.
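The Bayes rule for this synthetic problem can be reproduced with a few lines of NumPy/SciPy. The sketch below is our reconstruction of the setup; equal class priors and the per-class sample counts are assumptions, since the text only specifies the class-conditional Gaussians and the total set sizes.

```python
import numpy as np
from scipy.stats import multivariate_normal

means = [np.array([1, 2]), np.array([-1, 0]), np.array([2, -1])]
covs = [np.array([[1.0, 0.5], [0.5, 2.0]]),
        np.array([[1.0, 0.3], [0.3, 1.0]]),
        np.array([[0.4, 0.1], [0.1, 0.8]])]

rng = np.random.default_rng(0)
n = 2000  # samples per class (an assumption; the paper draws 1,000 total per set)
X = np.vstack([rng.multivariate_normal(m, c, n) for m, c in zip(means, covs)])
y = np.repeat(np.arange(3), n)

# Bayes rule under equal priors: pick the class of highest likelihood.
likelihoods = np.column_stack(
    [multivariate_normal(m, c).pdf(X) for m, c in zip(means, covs)])
bayes_pred = likelihoods.argmax(axis=1)
print("estimated Bayes error:", np.mean(bayes_pred != y))  # close to the ~11% above
```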
5.2 CD-MCBoost
We next conducted a number of experiments to evaluate the performance of CD-MCBoost on the
six UCI datasets of Table 1. Among the methods identified as comparable in the previous section,
we implemented "one vs all" and AdaBoost-MH [18]. In all cases, decision stumps were used as weak learners, and we used the training/test set decomposition specified for each dataset. The "one vs all" detectors were trained with 20 iterations. The remaining methods were then allowed
to include the same number of weak learners in their final decision rules. Table 1 presents the
resulting classification accuracies. CD-MCBoost produced the most accurate classifier in five of the six datasets, and was a close second in the remaining one. "One vs all" achieved the next best performance, with AdaBoost-MH producing the worst classifiers.
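For reference, a decision stump of the kind used in these experiments can be fit by exhaustive search. This is a generic weighted-error stump and a minimal sketch only; each boosting method above actually selects stumps by its own criterion (e.g. the directional derivatives (23)/(30) for MCBoost), so treat this as illustrative rather than the exact weak learner.

```python
import numpy as np

def fit_stump(X, y, w):
    """Fit a classification decision stump by exhaustive search: split one
    feature at a threshold and predict the weighted-majority class on each
    side, minimizing the weighted error sum_i w_i * 1[h(x_i) != y_i]."""
    classes = np.unique(y)
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            right = X[:, j] > t
            c_r = classes[np.argmax([w[right & (y == c)].sum() for c in classes])]
            c_l = classes[np.argmax([w[~right & (y == c)].sum() for c in classes])]
            pred = np.where(right, c_r, c_l)
            err = w[pred != y].sum()
            if err < best_err:
                best_err, best = err, (j, t, c_l, c_r)
    return best  # (feature index, threshold, left class, right class)
```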
5.3 GD-MCBoost
Finally, the performance of GD-MCBoost was compared to AdaBoost-M1 [7], AdaBoost-Cost [16]
and AdaBoost-SAMME [12]. The experiments were based on the UCI datasets of the previous section, but the weak learners were now trees of depth 2. These were built with a greedy procedure
so as to 1) minimize the weighted error rate of AdaBoost-M1 [7] and AdaBoost-SAMME[12], 2)
minimize the classification cost of AdaBoost-Cost [16], or 3) maximize (32) for GD-MCBoost. Table 2 presents the classification accuracy of each method, for 50 training iterations. GD-MCBoost
achieved the best accuracy on all datasets, reaching substantially larger classification rate than all
other methods in the most difficult datasets, e.g. from a previous best of 63.69% to 84.28% in
isolet, 45.65% to 59.65% in letter, and 83.82% to 92.94% in pendigit. Among the remaining methods, AdaBoost-SAMME achieved the next best performance, although this was close to that of
AdaBoost-Cost. AdaBoost-M1 had the worst results, and was not able to boost the weak learners
used in this experiment for four of the six datasets. It should be noted that the results of Tables 1 and
2 are not directly comparable, since the classifiers are based on different types of weak learners and
have different complexities.
³We emphasize the fact that these are plots of $f^t(x) \in \mathbb{R}^2$, not $x \in \mathbb{R}^2$.
References
[1] E. L. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: a unifying approach for
margin classifiers. J. Mach. Learn. Res., 1:113–141, September 2001.
[2] N. N. Author. Suppressed for anonymity.
[3] H. S. M. Coxeter. Regular Polytopes. Dover Publications, 1973.
[4] T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes.
Journal of Artificial Intelligence Research, 2:263?286, 1995.
[5] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley, New York, 2. edition, 2001.
[6] G. Eibl and R. Schapire. Multiclass boosting for weak classifiers. In Journal of Machine Learning
Research, 6:189–210, 2005.
[7] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In Proceedings of the Thirteenth International Conference In Machine Learning, pages 148?156, 1996.
[8] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application
to boosting. Journal of Comp. and Sys. Science, 1997.
[9] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting.
Annals of Statistics, 28, 1998.
[10] B. A. Frigyik, S. Srivastava, and M. R. Gupta. An introduction to functional derivatives. Technical
Report (University of Washington), 2008.
[11] Y. Guermeur. VC theory of large margin multi-category classifiers. J. Mach. Learn. Res., 8:2551–2594,
December 2007.
[12] J. Zhu, H. Zou, S. Rosset, and T. Hastie. Multi-class adaboost. Statistics and Its Interface, 2:349–360, 2009.
[13] H. Masnadi-Shirazi, N. Vasconcelos, and V. Mahadevan. On the design of robust classifiers for computer
vision. In CVPR, 2010.
[14] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent. In NIPS, 2000.
[15] D. Mease and A. Wyner. Evidence contrary to the statistical view of boosting. J. Mach. Learn. Res.,
9:131–156, June 2008.
[16] I. Mukherjee and R. E. Schapire. A theory of multiclass boosting. In NIPS, 2010.
[17] M. J. Saberian, H. Masnadi-Shirazi, and N. Vasconcelos. Taylorboost: First and second order boosting
algorithms with explicit margin control. In CVPR, 2010.
[18] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Mach.
Learn., 37:297–336, December 1999.
[19] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, Inc., 1998.
| 4450 |@word duda:1 covariance:1 decomposition:1 frigyik:1 reduction:1 score:1 existing:1 assigning:1 written:1 john:1 additive:1 designed:1 plot:1 update:12 v:7 discrimination:1 greedy:1 intelligence:1 plane:1 sys:1 dover:1 boosting:39 codebook:1 five:1 along:5 consists:1 combine:1 introduce:1 multi:9 ecoc:3 jm:1 increasing:3 estimating:1 interpreted:2 substantially:2 minimizes:2 unified:1 guarantee:5 y3:1 multidimensional:3 classifier:23 rm:8 control:1 unit:1 yn:1 producing:1 before:1 understood:1 mach:4 f1t:4 initialization:1 equivalence:1 unique:1 yj:1 practice:1 recursive:1 implement:4 procedure:5 empirical:1 thought:1 projection:3 confidence:1 regular:5 close:3 selection:1 operator:1 risk:17 context:1 py:4 optimize:1 equivalent:5 maximizing:3 straightforward:1 independently:1 convex:5 correcting:1 m2:1 rule:12 isolet:3 notion:1 coordinate:4 updated:4 annals:1 diego:2 play:1 us:3 origin:2 element:1 approximated:1 particularly:1 updating:1 anonymity:1 asymmetric:1 mukherjee:1 role:1 worst:2 decrease:4 complexity:3 miny:1 saberian:3 trained:1 reviewing:1 solving:1 pendigit:3 exit:1 learner:29 basis:1 f2:3 translated:1 easily:1 joint:2 mh:5 various:1 derivation:1 distinct:2 effective:2 artificial:1 whose:1 larger:3 solve:2 valued:1 cvpr:2 ability:1 statistic:2 jointly:2 final:1 advantage:1 propose:1 product:2 remainder:1 uci:4 combining:1 aligned:1 convergence:1 produce:1 converges:2 derive:2 gradientboost:1 depending:1 frean:1 ex:2 solves:2 strong:2 implemented:1 implies:1 differ:2 direction:8 closely:1 correct:2 hull:1 vc:1 centered:1 enable:1 require:2 f1:4 generalization:2 underdetermined:1 extension:4 correction:1 hold:1 considered:1 normal:1 mapping:1 lm:6 smallest:1 label:10 sensitive:1 individually:1 largest:2 weighted:2 minimization:5 gaussian:1 reaching:1 avoid:2 shuttle:2 allwein:1 publication:1 derived:2 june:1 sense:6 landsat:2 wij:4 overall:1 classification:20 arg:14 among:2 denoted:2 constrained:1 special:1 equal:2 vasconcelos:3 washington:1 identical:1 minf:2 minimized:2 simplex:3 report:1 primarily:1 masnadi:2 randomly:1 simultaneously:4 replaced:1 friedman:1 evaluation:1 alignment:2 behind:1 accurate:2 tree:6 re:3 increased:3 extensible:1 maximization:1 cost:8 vertex:2 predictor:33 conducted:2 accomplish:1 synthetic:2 gd:16 fundamental:1 international:1 ym:1 derivative:4 leading:1 return:1 potential:1 stump:4 summarized:1 includes:2 coefficient:1 inc:1 explicitly:3 depends:1 view:4 h1:3 closed:2 start:4 bayes:11 minimize:2 ni:2 accuracy:5 directional:2 weak:30 produced:1 none:1 worth:1 comp:1 ary:8 detector:2 definition:5 nuno:2 associated:2 sampled:1 dataset:2 popular:2 higher:1 adaboost:29 improved:1 formulation:3 hand:1 lack:1 logistic:2 yf:5 shirazi:2 dietterich:1 true:2 y2:3 counterpart:1 hence:1 laboratory:2 game:1 encourages:1 noted:1 criterion:1 generalized:1 theoretic:1 mohammad:1 g1j:2 interface:1 fj:9 wise:1 fi:1 common:1 functional:13 ji:1 stork:1 extend:2 interpretation:2 m1:6 belong:1 discussed:1 optdigit:2 significant:1 rd:7 fk:2 hp:3 nonlinearity:1 had:1 dot:2 gj:6 etc:1 belongs:1 codeword:4 binary:21 samme:7 maxd:1 yi:11 minimum:6 greater:1 additional:1 mr:1 maximize:3 dashed:1 reduces:1 technical:1 hart:1 impact:1 prediction:3 regression:2 vision:1 iteration:15 represent:1 achieved:3 receive:2 addition:1 thirteenth:1 extra:1 unlike:1 member:1 contrary:2 december:2 noting:1 mahadevan:1 baxter:1 fit:1 hastie:2 identified:2 fm:7 reduce:3 idea:2 multiclass:31 expression:1 six:4 bartlett:1 proceed:1 york:1 clear:1 category:1 schapire:6 outperform:2 
problematic:1 canonical:1 sign:1 per:2 tibshirani:1 summarised:1 four:2 reliance:1 nevertheless:1 ht:9 convert:1 letter:3 extends:1 family:1 decision:14 appendix:6 comparable:6 bound:1 hi:4 guaranteed:1 convergent:2 constraint:1 min:3 span:4 guermeur:1 combination:4 son:1 suppressed:1 wi:5 outlier:1 restricted:2 discus:1 singer:2 end:3 generalizes:1 available:3 generic:1 alternative:4 robustness:1 remaining:4 include:3 newton:1 unifying:1 bakiri:1 classical:3 codewords:23 strategy:1 coxeter:1 dependence:1 traditional:1 said:1 september:1 gradient:7 subspace:1 distance:5 mapped:1 enforcing:4 code:3 mini:1 difficult:1 negative:1 design:5 proper:2 fjt:5 upper:1 datasets:8 descent:11 immediate:1 maxk:3 y1:5 ucsd:2 introduced:1 pair:4 specified:1 california:2 learned:5 polytopes:1 boost:5 nip:2 address:1 beyond:1 able:2 usually:1 pattern:1 built:1 including:1 max:11 greatest:2 suitable:1 difficulty:3 natural:1 rely:1 mcboost:39 zhu:1 rated:1 wyner:1 imply:1 identifies:2 carried:1 prior:1 l2:1 tangent:1 freund:2 loss:14 interesting:2 consistent:6 cd:19 algorithms2:1 benefit:1 dimension:2 xn:1 depth:2 author:1 collection:1 san:2 avoided:1 far:1 emphasize:1 implicitly:1 global:5 overfitting:1 sequentially:1 xi:16 alternatively:1 continuous:1 search:1 table:6 learn:4 robust:2 investigated:1 necessarily:1 complex:1 zou:1 main:1 edition:1 allowed:1 x1:1 augmented:2 mease:1 wiley:2 sub:1 explicit:1 lie:1 learns:2 emphasizing:1 specific:2 r2:2 mason:1 gupta:1 evidence:1 vapnik:1 adding:1 hui:1 margin:26 simply:1 visual:2 expressed:1 fkt:2 scalar:1 minimizer:1 hard:1 change:1 determined:1 reducing:1 discriminate:1 attempted:1 support:2 dissimilar:1 evaluate:2 srivastava:1 |
3,812 | 4,451 | Understanding the Intrinsic Memorability of Images
Phillip Isola
MIT
Devi Parikh
TTI-Chicago
Antonio Torralba
MIT
Aude Oliva
MIT
[email protected]
[email protected]
[email protected]
[email protected]
Abstract
Artists, advertisers, and photographers are routinely presented with the task of
creating an image that a viewer will remember. While it may seem like image
memorability is purely subjective, recent work shows that it is not an inexplicable
phenomenon: variation in memorability of images is consistent across subjects,
suggesting that some images are intrinsically more memorable than others, independent of a subject's contexts and biases. In this paper, we used the publicly
available memorability dataset of Isola et al. [13], and augmented the object and
scene annotations with interpretable spatial, content, and aesthetic image properties. We used a feature-selection scheme with desirable explaining-away properties to determine a compact set of attributes that characterizes the memorability of
any individual image. We find that images of enclosed spaces containing people
with visible faces are memorable, while images of vistas and peaceful scenes are
not. Contrary to popular belief, unusual or aesthetically pleasing scenes do not
tend to be highly memorable. This work represents one of the first attempts at
understanding intrinsic image memorability, and opens a new domain of investigation at the interface between human cognition and computer vision.
1 Introduction
Figure 1: Which of these images are the most memorable? See footnote 1 for the answer key.
When glancing at a magazine or browsing the Internet we are continuously exposed to photographs
and images. Despite this overflow of visual information, humans are extremely good at remembering
thousands of pictures and a surprising amount of their visual details [1, 15, 16, 25, 30]. But, while
some images stick in our minds, others are ignored or quickly forgotten. Artists, advertisers, and
photographers are routinely challenged by the question ?what makes an image memorable?? and
are then presented with the task of creating an image that will be remembered by the viewer.
While psychologists have studied human capacity to remember visual stimuli [1,15,16,25,30], little
work has systematically studied the differences in stimuli that make them more or less memorable.
In a recent paper [13], we quantified the memorability of 2222 photographs as the rate at which
subjects detect a repeat presentation of the image a few minutes after its initial presentation. The
memorability of these images was found to be consistent across subjects and across a variety of
contexts, making some of these images intrinsically more memorable than others, independent of
the subjects' past experiences or biases. Thus, while image memorability may seem like a quality
that is hard to quantify, our recent work suggests that it is not an inexplicable phenomenon.
[Figure 2: three scatter plots of memorability M against unusualness U, aesthetics A, and subjects' predicted memorability m (panels annotated with correlations, e.g. corr: -0.19), with example images (a)-(i) marked at high/low values of each pair.]
Figure 2: Distribution of memorability M of photographs with respect to unusualness U (left), aesthetics A
(middle) and subjects' guess on how memorable an image is m (right). All 2222 images from the memorability
dataset were rated along these three aspects by 10 subjects each. Contrary to popular belief, unusual and
aesthetically pleasing images are not predominantly the most memorable ones. Also shown are example images
that demonstrate this (e.g. (f) shows an image that is very aesthetic, but not memorable). Clearly, which images
are memorable is not intuitive, as seen by poor estimates from subjects (g).
But then again, subjective intuitions of what makes an image memorable may need to be revised. For instance, look at the photographs of Figure 1. Which images do you think are more memorable?¹
We polled various human and computer vision experts to get ideas as to what people think drives
memorability. Among the most frequent responses were unusualness (8 out of 16) and aesthetic
beauty (7 out of 16). Surprisingly, as shown in Figure 2, we find that these are weakly correlated
(and, in fact, negatively correlated) with memorability as measured in [13]. Further, when subjects
were asked to rate how memorable they think an image would be, their responses were weakly
(negatively) correlated to true memorability (Figure 2)!
While our previous work aimed at predicting memorability [13], here we aim to better understand
memorability. Any realistic use of the memorability of images requires an understanding of the key
factors that underlie memorability; be it for cognitive scientists to discover the mechanisms behind
memory or for advertisement designers to create more effective visual media.
Thus, the goal of this paper is to identify a collection of human-understandable visual attributes that
are highly informative about image memorability. First, we annotate the memorability dataset [13]
with interpretable and semantic attributes. Second, we employ a greedy feature selection algorithm
with desirable explaining-away properties that allows us to explicitly determine a compact set of
characteristics that make an image memorable. Finally, we train automatic detectors that predict
these characteristics, which are in turn used to predict memorability.
2
Related work
Visual memory: People have been shown to have a remarkable ability to remember particular
images in long-term memory, be they everyday scenes, objects and events [30], or the shapes of
arbitrary forms [25]. As most of us would expect, image memorability depends on the user context
and is likely to be subject to some inter-subject variability [12]. However, in our previous work [13],
we found that despite this expected variability, there is also a large degree of agreement between
users. This suggests that there is something intrinsic to images that make some more memorable than
others, and in [13] we developed a computer vision algorithm to predict this intrinsic memorability.
While being a useful goal, prediction systems are often uninterpretable, giving us little insight into
what makes the image memorable. Hence in this work, we focus on identifying the characteristics of
images that make them memorable. A discussion of different models of memory retrieval [3,11,27]
and formation [22] are beyond the scope of this paper.
Attributes for interpretability: Attributes-based visual recognition has received a lot of attention
in computer vision literature in recent years. Attributes can be thought of as mid-level interpretable
features such as "furry" and "spacious". Attributes are attractive because they allow for transfer learning among categories that share attributes [18]. Attributes also allow for descriptions of previously unseen images [8]. In this work, we exploit attributes to understand which properties of an
image make it memorable.
Predicting image properties: While image memorability is vastly unexplored, many other photographic properties have been studied in the literature, such as photo quality [21], saliency [14],
attractiveness [20], composition [10, 24], color harmony [5], and object importance [29]. Most related to our work is the recent work of Dhar et al. [7], who use attributes to predict the aesthetic
quality of an image. Towards the goal of improved prediction, they use a list of attributes known to
influence the aesthetic quality of an image. In our work, since it is not known what makes an image
¹Images (a,d,e) are among the most memorable images in our dataset, while (b,c,f) are among the least.
[Figure 3 panels: (a)-(e) show images rated high on attractive, funny, makes-sad, quality photo, and peaceful; (f)-(j) show images rated low on the same attributes.]
Figure 3: Example images depicting varying values of a subset of attributes annotated by subjects.
memorable, we use an exhaustive list of attributes, and use a feature selection scheme to identify
which attributes make an image memorable.
3 Attribute annotations
We investigate memorability using the memorability dataset from [13]. The dataset consists of 2222
natural images of everyday scenes and events selected from the SUN dataset [32], as well as memorability scores for each image. The memorability scores were obtained via 665 subjects playing
a "memory game" on Amazon's Mechanical Turk. A series of natural images were flashed for 1
second each. Subjects were instructed to press a key whenever they detected a repeat presentation of
an image. The memorability score of an image corresponds to the number of subjects that correctly
detected a repeat presentation of the image. The rank correlation between two halves of the subjects
was found to be 0.75, providing evidence for intrinsic image memorability. Example images from this dataset can be seen throughout the paper.
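The split-half consistency number quoted above (0.75) can be computed along the following lines. The (subjects x images) hit matrix is our hypothetical encoding of the game data; in the real experiment each subject only saw a subset of the images, which this sketch glosses over.

```python
import numpy as np
from scipy.stats import spearmanr

def split_half_consistency(hits, seed=0):
    """Split subjects into two random halves, score every image by the
    repeat-detection rate within each half, and rank-correlate the two
    score vectors. `hits` is a (num_subjects x num_images) 0/1 matrix."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(hits.shape[0])
    half1, half2 = perm[: len(perm) // 2], perm[len(perm) // 2 :]
    rho, _ = spearmanr(hits[half1].mean(axis=0), hits[half2].mean(axis=0))
    return rho  # reported as ~0.75 for the dataset of [13]
```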
The images in the memorability dataset come from ~700 scene categories [32]. They have been labeled via the LabelMe [26] online annotation tool, and contain about 1300 object categories.
While the scene and object categories depicted in an image may very well influence its memorability,
there are many other properties of an image that could be at play. To get a handle on these, we
constructed an extensive list of image properties or attributes, and had the 2222 images annotated
with these properties using Amazon's Mechanical Turk. An organization of the attributes collected is shown in Table 1. Binary attributes are listed with a '?', while multi-valued attributes (on a scale of 1-5) are listed with a ';'. Each image was annotated by 10 subjects for each of the attributes.
The average response across the subjects was stored as the value of the attribute for an image. The
"Length of description" attribute was computed as the average number of words subjects used to
describe the image (free-form). The spatial layout attributes were based on the work of Oliva and
Torralba [23]. Many of the aesthetic attributes are based on the work of Dhar et al. [7].
We noticed that images containing people tend to be highly memorable. However even among
images containing people, there is a variation in memorability that is consistent across subjects (split
half rank correlation = 0.71). In an effort to better understand memorability of images containing
people, we collected several attributes that are specific to people. These are listed in Table 2. The
annotations of these attributes were collected only on images containing people (and are considered
to be absent for images not containing people). This is compactly captured by the "contains a person"
attribute.
Some questions had multiple choice answers (for example, Age can take four values: child, teenager,
adult and senior). When applicable, the multiple choices are listed in parentheses in Table 2. Each
choice was treated as a separate binary attribute (e.g. is child). Some of the people-attributes were
referring to the entire image ("whole image") while others were referring to each person in the image ("per-person"). The per-person attributes were aggregated across all subjects and all people in the
image. See Figure 3 for example attribute annotations.
Table 1: General attributes
Spatial layout: Enclosed space vs. Open space; Perspective view vs. Flat view; Empty space vs.
Cluttered space; Mirror symmetry vs. No mirror symmetry (cf. [23])
Aesthetics: Post-card like? Buy this painting? Hang-on wall? Is aesthetic? Pleasant vs. Unpleasant; Unusual or strange vs. Routine or mundane; Boring vs. Striking colors; High quality (expert
photography) vs. Poor quality photo; Attractive vs. Dull photo; Memorable vs. Not memorable; Sky
present? Clear vs. Cloudy sky; Blue vs. Sunset sky; Picture of mainly one object vs. Whole scene;
Single focus vs. Many foci; Zoomed-in vs. Zoomed-out; Top down view vs. Side view (cf. [7])
Emotions: Frightening? Arousing? Funny? Engaging? Peaceful? Exciting? Interesting? Mysterious? Strange? Striking? Makes you happy? Makes you sad?
Dynamics: Action going on? Something moving in scene? Picture tells a story? About to happen?
Lot going on? Dynamic scene? Static scene? Have a lot to say; Length of description
Location: Famous place? Recognize place? Like to be present in scene? Many people go here?
Contains a person?
For further analysis, we utilize the most frequent 106 of the ~1300 objects present in the images (their presence, count, area in the image, and for a subset of these objects, area occupied in four quadrants of the image), 237 of the ~700 scene categories, and the 127 attributes listed in Tables 1 and 2. We also append image annotations with a scene hierarchy provided with the SUN dataset [32] that groups similar categories into a meta-category (e.g. indoor), as well as an object hierarchy derived from WordNet [9], that includes meta-categories such as organism and furniture. The scene hierarchy resulted in 19 additional scene meta-categories, while the object hierarchy resulted in 134 additional meta-categories. From here on, we will refer to all these annotations as features. We have a total of 923 features. The goal now is to determine a concise subset of these features that characterizes the memorability of an image. Since all our features are human-interpretable, this allows us to gain an understanding of what makes an image memorable. Figure 4 shows the correlation of different feature types with memorability.
Figure 4: Correlation of attribute, scene, and object annotations with memorability. [Bar plot of the magnitude of correlation with memorability, up to ~0.36, over all 923 features; the strongest include enclosed space, person: face visible, person: eye contact, sky, and number of people in image.] We see that the attributes are most strongly correlated with memorability. Many of the features are correlated with each other (e.g. face visible and eye contact), suggesting a need for our feature selection strategy to have explaining-away properties.
4 Feature selection
Our goal is to identify a compact set of features that characterizes the memorability of an image.
We note that several of our features are redundant. Some by design (such as pleasant and aesthetic)
to better capture subjective notions, but others due to contextual relationships that prevail in our
visual world (e.g. outdoor images typically contain sky). Hence, it becomes crucial that our feature
selection algorithm has explaining away properties so as to determine a set of distinct characteristics
that make an image memorable. Not only is this desirable via the Occam's razor view, it is also
practical from an applications stand-point.
Moreover, we note that some features in our set subsume other features. For example, since the
person attributes (e.g. hair-color) are only labeled for images containing people, they include the
person presence / absence information in them. If a naive feature selection approach picked "hair-color" as an informative feature, it would be unclear whether the mere presence or absence of a
person in the image is what contributes to memorability, or if the color of the hair really matters.
This issue of miscalibration of information contained in a feature also manifests itself in a more
subtle manner. Our set of features includes inherently multi-valued information (e.g. mood of the
Table 2: Attributes describing people in image
Visibility (per-person): Face visible? Making eye-contact?
Demographics (per-person): Gender (male, female)? Age (child, teenager, adult, senior)? Race
(Caucasian, SouthEast-Asian, East-Asian, African-American, Hispanic)?
Appearance (per-person): Hair length (short, medium, long, bald)? Hair color (blonde, black,
brown, red, grey)? Facial hair?
Clothing (per-person): Attire (casual, business-casual, formal)? Shirt? T-shirt? Blouse? Tie?
Jacket? Sweater? Sweat-shirt? Skirt? Trousers? Shorts? A uniform?
Accessories (per-person): Dark eye-glasses? Clear eye-glasses? Hat? Earrings? Watch? Wrist
jewelry? Neck jewelry? Belt? Finger Ring(s)? Make-up?
Activity (per-person): Standing? Sitting? Walking? Running? Working? Smiling? Eating?
Clapping? Engaging in art? Professional activity? Buying? Selling? Giving a speech? Holding?
Activity (whole image): Sports? Adventurous? Tourist? Engaging in art? Professional? Group?
Subject (whole image): Audience? Crowd? Group? Couple? Individual? Individuals interacting?
Scenario (whole image): Routine/mundane? Unusual/strange? Pleasant? Unpleasant? Top-down?
image), as well as inherently binary information like "a car is present in the image". It is important
to calibrate the features by the amount of information captured by them.
Employing an information-theoretic approach to feature selection allows us to naturally capture both
these goals: selecting a compact set of non-redundant features and calibrating features based on the
information they contain.
4.1 Information-theoretic
We formulate our problem as that of selecting features that maximize mutual information with memorability, such that the total number of bits required to encode all selected features (i.e. the number
of bits required to describe an image using the selected features) does not exceed B. Formally,
$$F^* = \arg\max_F \; I(F; M) \quad \text{s.t.} \quad C(F) \leq B \qquad (1)$$
where F is a subset of the features, I (F ; M ) is the mutual information between F and memorability
M , B is the budget (in bits), and C(F ) is the total number of bits required to encode F . We assume
that each feature is encoded independently, and thus
$$C(F) = \sum_{i=1}^{n} C(f_i), \quad f_i \in F \qquad (2)$$
where $C(f_i)$ is the number of bits required to encode feature $f_i$, computed as $H(f_i)$, the entropy of feature $f_i$ across the training images.
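Concretely, with the histogram estimator used later in Section 5, the per-feature cost can be computed as below. This is a minimal sketch under the stated convention that C(f_i) = H(f_i); the 7-bin discretization matches the choice reported in Section 5.

```python
import numpy as np

def feature_cost(values, bins=7):
    """C(f_i) = H(f_i): entropy (in bits) of the feature's histogram over
    the training images, using the 7-bin discretization of Section 5."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins (0 log 0 = 0)
    return float(-np.sum(p * np.log2(p)))
```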
This optimization is combinatorial in nature, and is NP-hard to solve. Fortunately, the work of
Krause et al. [17] and Leskovec et al. [19] provides us with a computationally feasible algorithm
to solve the problem. Krause et al. [17] showed mutual information to be a submodular function.
A greedy optimization scheme to maximize submodular functions was shown to be optimal, with a
constant approximation factor of $(1 - \frac{1}{e})$; i.e. no polynomial time algorithm can provide a tighter
bound. Subsequently, Leskovec et al. [19] presented a similar greedy algorithm to select features,
where each feature has a different cost associated with it (as in our set-up). The algorithm selects
features with the maximum ratio of improvement in mutual information to their cost, while the total
cost of the features does not exceed the allotted budget. In parallel, the cost-less version of the
greedy algorithm is also used to select features (still not exceeding budget). Finally, of the two, the
set of features that provides the higher mutual information is retained. This solution is at most a
constant factor $\frac{1}{2}(1 - \frac{1}{e})$ away from the optimal solution [19]. Moreover, Leskovec et al. [19] also
provided a lazy evaluation scheme that provides significant computation benefits in practice, while
still maintaining the bound.
However, this lazy-greedy approach still requires the computation of mutual information between
memorability and subsets of features. At each iteration, the additional information provided by a
candidate feature fi over an existing set of features F would be the following:
$$I_G(f_i) = I(F \cup f_i; M) - I(F; M) \qquad (3)$$
This computation is not feasible given our large number of features and limited training data. Hence,
we greedily add features that maximize an approximation to the mutual information between a subset
of features and memorability, as also employed by Ullman et al. [31]. The additional information
provided by a candidate feature fi over an existing set of features F is approximated as:
$$\tilde{I}_G(f_i) = \min_{j} \left( I(f_j \cup f_i; M) - I(f_j; M) \right), \quad f_j \in F \qquad (4)$$
The ratio of this approximation to the cost of the feature is used as the score to evaluate the usefulness
of features during greedy selection. Intuitively, this ensures that the feature selected at each iteration maximizes the per-bit minimal gain in mutual information over each of the individual features
already selected.
In order to maximize the mutual information (approximation) beyond the greedy algorithm, we
employ multiple passes on the feature set. Given a budget B, we first greedily add features using
a budget of 2B, and then greedily remove features (that reduce the mutual information the least)
until we fall within the allotted budget B. This allows for the features that were added greedily
early on in the forward pass, but are explained away by subsequently added features, to be dropped.
These forward and backward passes are repeated 4 times each. Note that at each pass, the objective
function cannot decrease, and the final solution is still guaranteed to have a total cost within the
allotted budget B.
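A single forward pass of this budgeted greedy procedure, scoring candidates by the approximate gain (4) divided by their cost, might look as follows. This is a simplified sketch assuming the features and memorability have already been discretized into integer codes; the lazy evaluation of [19] and the backward passes described above are omitted for brevity.

```python
import numpy as np

def entropy_of(codes):
    _, counts = np.unique(codes, return_counts=True, axis=0)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_info(feature_cols, m):
    """I(F; M) = H(F) + H(M) - H(F, M), from integer-coded columns."""
    return (entropy_of(np.column_stack(feature_cols)) + entropy_of(m)
            - entropy_of(np.column_stack(feature_cols + [m])))

def greedy_forward_pass(X, m, costs, budget):
    """Greedily add the feature maximizing (approximate gain of Eq. (4)) / cost
    while the total cost of the selected set stays within the budget."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        spent = sum(costs[j] for j in selected)
        def gain(i):
            if not selected:
                return mutual_info([X[:, i]], m)
            return min(mutual_info([X[:, j], X[:, i]], m)
                       - mutual_info([X[:, j]], m) for j in selected)
        scored = [(gain(i) / costs[i], i) for i in remaining
                  if spent + costs[i] <= budget]
        if not scored:
            break
        _, best = max(scored)
        selected.append(best)
        remaining.remove(best)
    return selected
```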
4.2 Predictive
The behavior of the above approximation to mutual information has not been formally studied.
While this may provide a good means to prune out many candidate features, it is unclear how close
to optimal the selections will be. Feature selection within the realm of a predictive model allows us
to better capture features that achieve a concrete and practical measure of performance: "which set of features allows us to make the best predictions about an image's memorability?" While selecting
such features would be computationally expensive to do over all our 923 features, using a pruned set
of features obtained via information-theoretic selection makes this feasible. We employ a support
vector regressor (SVR, [28]) as our predictive model.
Given a set of features selected by the information-theoretic method above, we greedily select features (again, while maintaining a budget) that provide the biggest boost in regression performance
(Spearman's rank correlation between predicted and ground truth memorabilities) over the training
set. The same cost-based lazy-greedy selection algorithm is used as above, except with only a single
pass over the feature set. This is inspired by the recent work of Das et al. [6], who analyzed
the performance of greedy approaches to maximize submodular-like functions. They found that the
submodularity ratio of a function is the best predictor of how well a greedy algorithm performs.
Moreover, they found that in practice, regression performance has a high submodularity ratio, justifying the use of a greedy approach.
An alternative to greedy feature selection would be to learn a sparse regressor. However, the parameter that controls the sparsity of the vector is neither intuitive nor interpretable. In the greedy feature
selection approach, the budget of bits, which is interpretable, can be explicitly enforced.
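One step of the predictive selection can be sketched as follows: a candidate feature is scored by the rank correlation an SVR achieves on held-out data once the feature is added. Hyperparameter tuning, the bit budget, and the averaging over random splits are omitted here; the function is an illustrative sketch, not the authors' implementation.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVR

def predictive_gain(X_tr, y_tr, X_va, y_va, selected, candidate):
    """Score `candidate` by the validation rank correlation of an SVR
    trained on the currently selected features plus the candidate."""
    cols = selected + [candidate]
    model = SVR().fit(X_tr[:, cols], y_tr)
    rho, _ = spearmanr(model.predict(X_va[:, cols]), y_va)
    return rho
```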
5 Results
Attribute annotations help: We first tested the degree to which each general feature-type annotation in our feature set is effective at predicting memorability. We split the dataset from [13] into
2/3 training images scored by half the subjects and 1/3 test images scored by the left out half of
the subjects. We trained ε-SVRs [4] to predict memorability, using grid search to select cost and hyperparameters. For the new attributes we introduced, and for the object and scene hierarchy features, we used RBF kernels, while for the rest of the features we used the same kernel functions as in [13]. We report performance as Spearman's rank correlation (ρ) between predicted and ground truth memorabilities averaged over 10 random splits of the data.
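A sketch of this training protocol for one feature type is given below, using scikit-learn in place of the original tooling. The grid values are hypothetical (the text only says that cost and hyperparameters were grid-searched), and scoring is by Spearman's ρ as in the evaluation.

```python
from scipy.stats import spearmanr
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

def spearman_rho(y_true, y_pred):
    rho, _ = spearmanr(y_true, y_pred)
    return rho

# Hypothetical grid; the paper does not report the exact search ranges.
svr_search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "epsilon": [0.01, 0.05, 0.1]},
    scoring=make_scorer(spearman_rho),
)
# svr_search.fit(X_train, y_train)
# test_rho = spearman_rho(y_test, svr_search.predict(X_test))
```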
Results are shown in Table 3. We found that our new attribute annotations performed quite well (ρ = 0.528): they outperform the higher dimensional object and scene annotations.

Table 3: Performance (rank correlation) of different types of features at predicting image memorability.
Feature type                  | Perf
Object annotations            | 0.494
Scene annotations             | 0.415
Attribute annotations         | 0.528
Objects + Scenes + Attributes | 0.554

Feature selection: We next selected the individual best features in our set according to the feature selection algorithms described above. To compute feature entropy and mutual information, we used histogram estimators on our training data, with 7 bins per feature and 10 bins for memorability. Using these estimators, and measuring feature set cost according to (2), our entire set of 923 features has a total cost of 252 bits. We selected reduced feature sets by both running information-theoretic selection and predictive selection on our 2/3 training splits, for budgets ranging from 1 to 100 bits.
For predictive selection, we further split our training set in half
and trained SVRs on one half to predict memorability on the
other half. At each iteration of selection, we greedily selected
the feature that maximized predictive performance averaged
over 3 random split trials, with predictive performance again measured as rank correlation between predictions and ground truth memorabilities. Since predictive selection is computationally expensive, we reduced our candidate feature set by first pruning with information-theoretic selection. We took as candidates the union of all features that were selected using our information-theoretic approach for budgets of 1, 2, ..., 100 bits.
Taking this union, rather than just the features selected at a
100-bit budget, ensures that candidates were not missed when
they are only effective in small budget sets.
[Figure 5 plot: rank correlation (y-axis, 0.10 to 0.55) versus log2 bit budget (x-axis, 1 to 7) for information-theoretic, predictive, and random selection.]
Figure 5: Regression performance vs.
log bit budget of various types of feature selection. The diminishing returns
(submodular-like) behavior is evident.
Next, we validated our selections on our 1/3 test set. We
trained SVRs using each of our selected feature sets and made
predictions on the test set. Both selection algorithms create feature sets that are similarly effective
at predicting memorability (Figure 5). Using just a 16-bit budget, information-theoretic selection
achieves ρ = 0.472, and predictive selection achieves ρ = 0.490 (this budget resulted in selected sets with 6 to 11 features). This performance is comparable to the performance we get using much costlier features, such as our full list of object annotations (540 features, ~106 bits, ρ = 0.490). As a baseline, we also compared against randomly selecting feature sets up to the same budget, which, for 16 bits, only gives ρ = 0.119.
We created a final list of features by running the above feature selection methods on the entire dataset (no held out data) for a budget of 10 bits. This produced the sets listed in Table 4. If one is trying to understand memorability, these features are a good place to start. In Figure 6, we explore these features further by hierarchically clustering our images according to the predictive set. Each cluster can be thought of as specifying a type of image with respect to memorability. For example, on the far right we have highly memorable "pictures of people in an enclosed space" and on the far left we have forgettable "peaceful, open, unfamiliar spaces, devoid of people."

Table 4: Information-theoretic and predictive feature selections for a budget of 10 bits. Correlations with memorability are listed after each feature (arrow indicates direction of correlation). Selections and correlations run on entire dataset.
Information-theoretic: ↑ enclosed space (0.39); ↑ face visible (0.37); ↓ peaceful (-0.33); ↓ sky present (-0.35).
Predictive: ↑ enclosed space (0.39); ↑ face visible (0.37); ↑ tells a story (0.18); ↑ recognize place (0.16); ↓ peaceful (-0.33).
Automatic prediction: While our focus in this paper is on understanding memorability, we hope
that by understanding the phenomenon we may also be able to build better automatic predictors of
[Figure 6 regression tree: root split enclosed_space > 0.47; further splits on face_visible (> 0.47, > 0.30, > 0.21), peaceful > 0.75, and recognize_place (> 0.45, > 0.55); leaf memorabilities range from 0.57 to 0.85.]
Figure 6: Hierarchical clustering of images in "memorability space" as achieved via a regression tree [2], along with example images from each cluster. Memorability of each cluster given at the leaf nodes, and also depicted
as shade of cluster image borders (darker borders correspond to lower memorability than brighter borders).
it. The only previous work predicting memorability is our recent paper [13]. In that paper, we made predictions on the basis of a suite of global image features: pixel histograms, GIST, SIFT, HOG, and SSIM [13]. Running the same methods on our current 2/3 data splits achieves ρ = 0.468. Here we
attempt to do better by using our selected features as an abstraction layer between raw images and
memorability.
We trained a suite of SVRs to predict annotations from images, and another SVR to predict memorability from these predicted annotations. For image features, we used the same methods as [13]. For the annotation types, we used the feature types selected by our 100-bit predictive selection on 2/3 training sets. To predict the annotations for each image in our training set, we split the training set in half and predicted annotations for one half by training on the other half, and vice versa, covering both halves with predictions.

Table 5: Performance (rank correlation) of automatic memorability prediction methods.
Features          | Perf.
Direct [13]       | 0.468
Indirect          | 0.436
Direct + indirect | 0.479
We then trained a final SVR to predict memorability on the test set
in three ways: 1) using only image features (Direct), 2) using only predicted annotations (Indirect),
and 3) using both (Direct + Indirect) (Table 5). Combining indirect predictions with direct predictions performed best (ρ = 0.479), slightly outperforming the direct prediction method of our previous work [13] (ρ = 0.468).
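The leakage-free cross-fitting step of the indirect pipeline can be sketched as below; `SVR()` with default settings stands in for the tuned regressors, and the final memorability model (fit on predicted annotations, optionally stacked with image features for "Direct + indirect") is left as a comment.

```python
import numpy as np
from sklearn.svm import SVR

def cross_fit_annotations(img_feats, annots, split):
    """Predict every annotation for every training image without leakage:
    train on one half (boolean mask `split`), predict the other half,
    then swap. `annots` is (n_images x n_annotations)."""
    preds = np.zeros_like(annots, dtype=float)
    for tr, te in [(split, ~split), (~split, split)]:
        for a in range(annots.shape[1]):
            preds[te, a] = SVR().fit(img_feats[tr], annots[tr, a]).predict(img_feats[te])
    return preds

# final_model = SVR().fit(cross_fit_annotations(F, A, s), memorability)
```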
6 Conclusion
The goal of this work was to characterize aspects of an image that make it memorable. Understanding these characteristics is crucial for anyone hoping to work with memorability, be they psychologists, advertisement-designers, or photographers. We augmented the object and scene annotations
of the dataset of Isola et al. [13] with attribute annotations describing the spatial layout, content,
and aesthetic properties of the images. We employed a greedy feature selection scheme to obtain
compact lists of features that are highly informative about memorability and highly predictive of
memorability. We found that images of enclosed spaces containing people with visible faces are
memorable, while images of vistas and peaceful settings are not. Contrary to popular belief, unusualness and aesthetic beauty attributes are not associated with high memorability (in fact, they are negatively correlated with memorability), and these attributes are not among our top few selections,
indicating that other features more concisely describe memorability (Figure 4).
Through this work, we have begun to uncover some of the core features that contribute to image
memorability. Understanding how these features interact to actually produce memories remains an
important direction for future research. We hope that by parsing memorability into a concise and
understandable set of attributes, we have provided a description that will interface well with other
domains of knowledge and may provide fodder for future theories and applications of memorability.
Acknowledgements: We would like to thank Jianxiong Xiao for providing the global image features. This work is supported by the National Science Foundation under Grant No. 1016862 to
A.O., CAREER Awards No. 0546262 to A.O and No. 0747120 to A.T. A.T. was supported in part
by the Intelligence Advanced Research Projects Activity via Department of the Interior contract
D10PC20023, and ONR MURI N000141010933.
8
References
[1] T. F. Brady, T. Konkle, G. A. Alvarez, and A. Oliva. Visual long-term memory has a massive storage
capacity for object details. In Proceedings of the National Academy of Sciences, 2008.
[2] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and regression trees. Boca Raton, FL:
CRC Press, 1984.
[3] G. D. A. Brown, I. Neath, and N. Chater. A temporal ratio model of memory. Psych. Review, 2007.
[4] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001.
[5] D. Cohen-Or, O. Sorkine, R. Gal, T. Leyvand, and Y.-Q. Xu. Color harmonization. ACM Transactions on
Graphics (Proceedings of ACM SIGGRAPH), 2006.
[6] A. Das and D. Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. In arXiv:1102.3975v2 [stat.ML], 2011.
[7] S. Dhar, V. Ordonez, and T. L. Berg. High level describable attributes for predicting aesthetics and
interestingness. In IEEE Computer Vision and Pattern Recognition, 2011.
[8] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In IEEE Computer
Vision and Pattern Recognition, 2009.
[9] C. Fellbaum. Wordnet: an electronic lexical database. In The MIT Press, 1998.
[10] B. Gooch, E. Reinhard, C. Moulding, and P. Shirley. Artistic composition for image creation. In Eurographics Workshop on Rendering, 2001.
[11] M. W. Howard and M. J. Kahana. A distributed representation of temporal context. In Journal of Mathematical Psychology, 2001.
[12] R. R. Hunt and J. B. Worthen. Distinctiveness and memory. In NY:Oxford Univeristy Press, 2006.
[13] P. Isola, J. Xiao, A. Torralba, and A. Oliva. What makes an image memorable? In IEEE Computer Vision
and Pattern Recognition, 2011.
[14] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. In
Pattern Analysis and Machine Intelligence, 1998.
[15] T. Konkle, T. F. Brady, G. A. Alvarez, and A. Oliva. Conceptual distinctiveness supports detailed visual
long-term memory for realworld objects. In Journal of Experimental Psychology: General, 2010.
[16] T. Konkle, T. F. Brady, G. A. Alvarez, and A. Oliva. Scene memory is more detailed than you think: the
role of categories in visual longterm memory. In Psychological Science, 2010.
[17] A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In Conference on Uncertainty in Artificial Intelligence, 2005.
[18] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between class
attribute transfer. In IEEE Computer Vision and Pattern Recognition, 2009.
[19] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak
detection in networks. In ACM SIGKDD International Conference on Knowledge Discovery and Data
Mining, 2007.
[20] T. Leyvand, D. Cohen-Or, G. Dror, and D. Lischinski. Data-driven enhancement of facial attractiveness.
ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2008), 2008.
[21] Y. Luo and X. Tang. Photo and video quality evaluation: Focusing on the subject. In European Conference
on Computer Vision, 2008.
[22] J. L. McClelland, B. L. McNaughton, and R. C. O?Reilly. Why there are complementary learning systems
in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of
learning and memory. In Psychological Review, 1995.
[23] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. In International Journal of Computer Vision, 2001.
[24] L. Renjie, C. L. Wolf, and D. Cohen-Or. Optimizing photo composition. In Technical report, Tel-Aviv
University, 2010.
[25] I. Rock and P. Englestein. A study of memory for visual form. The American Journal of Psychology,
1959.
[26] B. C. Russell, A. Torralba, K. Murphy, and W. T. Freeman. Labelme: A database and web-based tool for
image annotation. In International Journal of Computer Vision, 2008.
[27] R. M. Shiffrin and M. Steyvers. A model for recognition memory: Rem - retrieving effectively from
memory. In Psychnomic Bulletin and Review, 1997.
[28] A. J. Smola and B. Schölkopf. A tutorial on support vector regression. Statistics and Computing, 14:199–222, 2004.
[29] M. Spain and P. Perona. Some objects are more equal than others: measuring and predicting importance.
In Proceedings of the European Conference on Computer Vision, 2008.
[30] L. Standing. Learning 10,000 pictures. In Quarterly Journal of Experimental Psychology, 1973.
[31] S. Ullman, M. Vidal-Naquet, and E. Sali. Visual features of intermediate complexity and their use in
classification. In Nature Neuroscience, 2002.
[32] J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. Sun database: Large-scale scene recognition from
abbey to zoo. In IEEE Conference on Computer Vision and Pattern Recognition, 2010.
| 4451 |@word trial:1 version:1 longterm:1 middle:1 polynomial:1 hippocampus:1 open:3 grey:1 concise:2 photographer:3 initial:1 series:1 score:4 contains:2 selecting:4 hoiem:1 subjective:3 past:1 existing:2 current:1 contextual:1 surprising:1 luo:1 parsing:1 chicago:1 visible:6 realistic:1 underly:1 informative:3 shape:2 happen:1 visibility:1 remove:1 interpretable:6 gist:1 hoping:1 v:17 greedy:15 selected:15 guess:1 half:11 caucasian:1 leaf:1 intelligence:3 ofmathematical:1 short:2 core:1 provides:3 node:1 location:1 contribute:1 belt:1 along:2 constructed:1 direct:6 retrieving:1 consists:1 manner:1 inter:1 expected:1 rapid:1 behavior:2 multi:2 shirt:3 buying:1 inspired:1 freeman:1 rem:1 costlier:1 little:2 farhadi:1 becomes:1 provided:5 discover:1 notation:2 moreover:3 maximizes:1 medium:2 project:1 spain:1 what:8 psych:1 dror:1 developed:1 gal:1 brady:3 suite:2 forgotten:1 remember:3 unexplored:1 sky:6 temporal:2 tie:1 stick:1 control:1 grant:1 interestingness:1 scientist:1 dropped:1 forgettable:1 despite:2 leyvand:2 oxford:1 ure:1 meet:1 black:1 studied:4 quantified:1 suggests:2 specifying:1 jacket:1 limited:1 hunt:1 averaged:2 practical:2 harmeling:1 wrist:1 practice:2 union:2 area:2 thought:2 reilly:1 word:1 quadrant:1 get:3 cannot:1 close:1 selection:36 svr:7 interior:1 memorability:76 context:4 influence:2 storage:1 jewelry:2 lexical:1 layout:3 attention:2 go:1 cluttered:1 independently:1 formulate:1 amazon:2 identifying:1 insight:2 estimator:2 steyvers:1 handle:1 notion:1 variation:2 mcnaughton:1 hierarchy:4 play:1 magazine:1 user:2 massive:1 agreement:1 engaging:3 recognition:8 approximated:1 walking:1 expensive:2 muri:1 labeled:2 sunset:1 database:3 role:1 capture:3 boca:1 thousand:1 ensures:2 sun:3 decrease:1 russell:1 intuition:1 complexity:1 asked:1 dynamic:2 trained:5 weakly:2 exposed:1 predictive:15 purely:1 negatively:3 creation:1 basis:1 compactly:1 selling:1 siggraph:2 indirect:5 routinely:2 various:2 vista:2 finger:1 train:1 distinct:1 effective:5 describe:3 detected:2 artificial:1 tell:2 formation:1 crowd:1 exhaustive:1 quite:1 encoded:1 valued:2 solve:2 say:1 ability:1 statistic:1 unseen:2 think:4 itself:1 final:3 online:1 mood:1 rock:1 took:1 polled:1 zoomed:2 fea:1 frequent:1 combining:1 holistic:1 shiffrin:1 achieve:1 academy:1 konkle:3 intuitive:2 description:4 everyday:2 empty:1 cluster:4 enhancement:1 produce:1 tti:1 ring:1 object:23 help:1 stat:1 measured:2 received:1 aesthetically:2 predicted:5 come:1 quantify:1 direction:2 submodularity:2 annotated:3 attribute:51 subsequently:2 human:6 bin:2 crc:1 wall:1 investigation:1 really:1 harmonization:1 tighter:1 viewer:2 clothing:1 koch:1 considered:1 ground:3 lischinski:1 cognition:1 predict:10 scope:1 bald:1 hispanic:1 achieves:3 dictionary:1 abbey:1 torralba:7 early:1 applicable:1 harmony:1 combinatorial:1 southeast:1 vice:1 create:2 tool:2 hope:2 mit:7 clearly:1 aim:1 rather:1 occupied:1 beauty:2 varying:1 eating:1 breiman:1 chater:1 encode:3 derived:1 focus:4 validated:1 improvement:1 rank:8 indicates:1 mainly:1 sigkdd:1 greedily:6 baseline:1 detect:2 glass:2 abstraction:1 entire:4 typically:1 diminishing:1 perona:1 going:2 selects:1 pixel:1 issue:1 among:6 classification:2 arg:1 spatial:5 art:2 kempe:1 mutual:12 univeristy:1 emotion:1 equal:1 represents:1 look:1 future:2 others:7 stimulus:2 np:1 report:2 few:2 employ:3 connectionist:1 randomly:1 recognize:2 resulted:3 individual:5 asian:2 national:2 murphy:1 attempt:2 pleasing:2 friedman:1 organization:1 detection:1 highly:6 investigate:1 mining:1 evaluation:2 
male:1 analyzed:1 behind:1 held:1 experience:1 facial:2 tree:2 re:1 skirt:1 leskovec:4 minimal:1 psychological:2 instance:1 modeling:1 measuring:2 challenged:1 calibrate:1 artistic:1 cost:11 subset:7 uniform:1 usefulness:1 predictor:2 graphic:2 characterize:1 stored:1 answer:2 endres:1 nickisch:1 referring:2 person:15 devoid:1 international:3 standing:2 contract:1 regressor:2 continuously:1 quickly:1 concrete:1 again:3 vastly:1 eurographics:1 containing:8 cognitive:1 creating:2 expert:2 american:2 itti:1 return:1 ullman:2 suggesting:2 includes:1 matter:1 forsyth:1 explicitly:2 race:1 depends:1 performed:2 view:5 lot:3 picked:1 characterizes:3 red:1 start:1 parallel:1 annotation:22 publicly:1 characteristic:5 who:2 maximized:1 sitting:1 identify:3 saliency:2 painting:1 correspond:1 famous:1 raw:1 artist:2 produced:1 schlkopf:1 mere:1 niebur:1 zoo:1 drive:1 casual:2 earring:1 african:1 n000141010933:1 footnote:1 detector:1 whenever:1 against:1 failure:1 mysterious:1 turk:2 naturally:1 arousing:1 associated:2 static:1 couple:1 gain:2 dataset:14 begun:1 intrinsically:2 popular:3 manifest:1 color:6 car:1 realm:1 knowledge:2 sorkine:1 subtle:1 routine:2 uncover:1 actually:1 fellbaum:1 focusing:1 higher:2 response:3 improved:1 alvarez:3 strongly:1 just:2 accessory:1 smola:1 correlation:13 until:1 working:1 web:1 glance:1 quality:7 qual:2 ordonez:1 aude:1 aviv:1 phillip:1 smiling:1 contain:3 true:1 brown:2 calibrating:1 hence:3 dull:1 furry:1 flashed:1 semantic:1 attractive:4 game:1 during:1 razor:1 covering:1 trying:1 stone:1 evident:1 theoretic:11 demonstrate:1 performs:1 interface:2 fj:3 image:115 photography:1 ranging:1 parikh:1 fi:12 predominantly:1 nonmyopic:1 cohen:3 organism:1 significant:1 composition:3 unfamiliar:1 versa:1 automatic:4 grid:1 similarly:1 submodular:5 had:2 moving:1 add:2 something:2 recent:7 female:1 perspective:1 showed:1 optimizing:1 driven:1 scenario:1 hay:1 meta:3 binary:3 remembered:1 outperforming:1 inexplicable:2 onr:1 success:1 guestrin:2 seen:2 captured:2 additional:4 remembering:1 isola:4 fortunately:1 employed:2 prune:1 determine:4 aggregated:1 advertiser:2 redundant:2 maximize:5 multiple:3 desirable:3 photographic:1 full:1 technical:1 long:4 retrieval:1 lin:1 justifying:1 post:1 award:1 parenthesis:1 prediction:12 regression:6 oliva:9 hair:5 vision:13 arxiv:1 annotate:1 iteration:3 kernel:2 histogram:2 achieved:1 audience:1 krause:4 crucial:2 envelope:1 rest:1 pass:2 subject:25 tend:2 contrary:3 seem:2 near:1 presence:3 exceed:2 aesthetic:13 split:8 ture:1 rendering:1 variety:1 intermediate:1 psychology:4 brighter:1 sweat:1 idea:1 reduce:1 teenager:2 absent:1 whether:1 effort:1 speech:1 action:1 antonio:1 ignored:1 useful:1 pleasant:3 aimed:1 listed:7 clear:2 detailed:2 amount:2 dark:1 mid:1 reinhard:1 neocortex:1 category:10 mcclelland:1 reduced:2 outperform:1 tutorial:1 designer:2 neuroscience:1 correctly:1 per:10 blue:1 group:3 key:3 four:2 shirley:1 libsvm:1 utilize:1 backward:1 dhar:3 trouser:1 year:1 enforced:1 run:2 realworld:1 you:4 uncertainty:1 striking:2 place:4 throughout:1 strange:3 electronic:1 funny:3 sad:3 missed:1 sali:1 uninterpretable:1 mundane:2 bit:19 comparable:1 bound:2 internet:1 layer:1 guaranteed:1 furniture:1 fl:1 activity:4 scene:25 flat:1 aspect:2 cloudy:1 extremely:1 ible:1 min:1 pruned:1 anyone:1 department:1 according:3 kahana:1 poor:2 miscalibration:1 spearman:2 across:7 slightly:1 vanbriesen:1 describable:1 making:2 psychologist:2 outbreak:1 intuitively:1 explained:1 computationally:2 previously:1 remains:1 turn:1 count:1 
mechanism:1 describing:3 mind:1 demographic:1 unusual:4 photo:7 available:1 clapping:1 vidal:1 quarterly:1 hierarchical:2 away:6 spectral:1 v2:1 alternative:1 faloutsos:1 professional:2 hat:1 top:3 running:3 cf:2 include:2 clustering:3 graphical:1 log2:1 maintaining:2 exploit:1 giving:2 build:1 overflow:1 contact:3 objective:1 noticed:1 question:2 already:1 added:2 strategy:1 unclear:2 separate:1 card:1 thank:1 capacity:2 collected:3 length:3 retained:1 relationship:1 providing:2 happy:1 ratio:5 olshen:1 holding:1 hog:1 append:1 design:1 understandable:2 ssim:1 revised:1 howard:1 subsume:1 variability:2 interacting:1 arbitrary:1 ttic:1 raton:1 introduced:1 mechanical:2 required:4 extensive:1 concisely:1 boost:1 renjie:1 adult:2 beyond:2 able:1 pattern:6 indoor:1 sparsity:1 interpretability:1 memory:16 max:1 belief:3 video:1 event:2 natural:2 treated:1 business:1 predicting:8 advanced:1 scheme:5 tations:1 rated:1 eye:5 library:1 picture:5 created:1 perf:2 naive:1 review:3 understanding:8 literature:2 acknowledgement:1 discovery:1 expect:1 interesting:1 tures:1 enclosed:6 remarkable:1 age:3 foundation:1 degree:2 consistent:3 xiao:3 exciting:1 story:2 systematically:1 playing:1 share:1 occam:1 repeat:3 surprisingly:1 free:1 supported:2 bias:2 allow:2 understand:4 senior:2 side:1 explaining:4 formal:1 face:7 fall:1 taking:1 distinctiveness:2 sparse:2 bulletin:1 benefit:1 distributed:1 world:1 stand:1 instructed:1 collection:1 forward:2 made:2 ig:2 employing:1 far:2 transaction:2 pruning:1 compact:5 hang:1 peaceful:10 tourist:1 ml:1 global:2 buy:1 conceptual:1 search:1 why:1 table:12 nature:2 learn:1 transfer:1 career:1 inherently:2 tel:1 depicting:1 symmetry:2 contributes:1 interact:1 european:2 adventurous:1 domain:2 da:2 hierarchically:1 arrow:1 whole:5 border:3 scored:2 hyperparameters:1 lampert:1 child:3 repeated:1 complementary:1 xu:1 augmented:2 memorable:31 biggest:1 attractiveness:2 fig:1 ehinger:1 darker:1 gorithms:1 ny:1 exceeding:1 candidate:6 outdoor:1 advertisement:2 tang:1 minute:1 down:2 shade:1 specific:1 boring:1 sift:1 list:6 evidence:1 intrinsic:5 workshop:1 corr:2 importance:2 prevail:1 mirror:2 effectively:1 magnitude:1 budget:20 browsing:1 entropy:2 depicted:2 photograph:4 likely:1 appearance:1 explore:1 devi:1 visual:14 lazy:3 contained:1 sport:1 watch:1 chang:1 gender:1 corresponds:1 truth:3 wolf:1 acm:5 goal:7 presentation:4 rbf:1 towards:1 labelme:2 absence:2 content:2 hard:2 feasible:3 naquet:1 except:1 wordnet:2 total:6 neck:1 pas:3 experimental:2 east:1 indicating:1 formally:2 select:4 unpleasant:2 allotted:3 people:18 support:4 berg:1 jianxiong:1 evaluate:1 tested:1 phenomenon:3 correlated:6 |
Convergence Rates of Inexact Proximal-Gradient
Methods for Convex Optimization
Mark Schmidt
[email protected]
Nicolas Le Roux
[email protected]
Francis Bach
[email protected]
INRIA - SIERRA Project Team
École Normale Supérieure, Paris
Abstract
We consider the problem of optimizing the sum of a smooth convex function and
a non-smooth convex function using proximal-gradient methods, where an error
is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic
proximal-gradient method and the accelerated proximal-gradient method achieve
the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, we perform as well as or better than
a carefully chosen fixed error level on a set of structured sparsity problems.
1 Introduction
In recent years the importance of taking advantage of the structure of convex optimization problems
has become a topic of intense research in the machine learning community. This is particularly
true of techniques for non-smooth optimization, where taking advantage of the structure of non-smooth terms seems to be crucial to obtaining good performance. Proximal-gradient methods and
accelerated proximal-gradient methods [1, 2] are among the most important methods for taking
advantage of the structure of many of the non-smooth optimization problems that arise in practice.
In particular, these methods address composite optimization problems of the form

$$\min_{x \in \mathbb{R}^d} \; f(x) := g(x) + h(x), \qquad (1)$$

where g and h are convex functions but only g is smooth. One of the most well-studied instances of this type of problem is ℓ1-regularized least squares [3, 4],

$$\min_{x \in \mathbb{R}^d} \; \frac{1}{2}\|Ax - b\|^2 + \lambda\|x\|_1,$$

where we use ‖·‖ to denote the standard ℓ2-norm.
Proximal-gradient methods are an appealing approach for solving these types of non-smooth optimization problems because of their fast theoretical convergence rates and strong practical performance. While classical subgradient methods only achieve an error level on the objective function of O(1/√k) after k iterations, proximal-gradient methods have an error of O(1/k), while accelerated proximal-gradient methods further reduce this to O(1/k²) [1, 2]. That is, accelerated proximal-gradient methods for non-smooth convex optimization achieve the same optimal convergence rate that accelerated gradient methods achieve for smooth optimization.

Each iteration of a proximal-gradient method requires the calculation of the proximity operator,

$$\mathrm{prox}_L(y) = \operatorname*{arg\,min}_{x \in \mathbb{R}^d} \; \frac{L}{2}\|x - y\|^2 + h(x), \qquad (2)$$
where L is the Lipschitz constant of the gradient of g. We can efficiently compute an analytic solution to this problem for several notable choices of h, including the case of ℓ1-regularization and disjoint group ℓ1-regularization [5, 6]. However, in many scenarios the proximity operator may not have an analytic solution, or it may be very expensive to compute this solution exactly. This includes important problems such as total-variation regularization and its generalizations like the graph-guided fused-LASSO [7, 8], nuclear-norm regularization and other regularizers on the singular values of matrices [9, 10], and different formulations of overlapping group ℓ1-regularization with general groups [11, 12]. Despite the difficulty in computing the exact proximity operator for these regularizers, efficient methods have been developed to compute approximate proximity operators in all of these cases; accelerated projected gradient and Newton-like methods that work with a smooth dual problem have been used to compute approximate proximity operators in the context of total-variation regularization [7, 13], Krylov subspace methods and low-rank representations have been used to compute approximate proximity operators in the context of nuclear-norm regularization [9, 10], and variants of Dykstra's algorithm (and related dual methods) have been used to compute approximate proximity operators in the context of overlapping group ℓ1-regularization [12, 14, 15].
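For intuition, the simplest case above is h(x) = λ‖x‖₁, for which (2) has the closed-form soft-thresholding solution. The following minimal NumPy sketch (our illustration, not code from the paper) computes it:

```python
import numpy as np

def prox_l1(y, lam, L):
    """Exact proximity operator (2) for h(x) = lam * ||x||_1.

    Solves argmin_x (L/2) * ||x - y||^2 + lam * ||x||_1, whose solution
    is elementwise soft-thresholding of y at level lam / L.
    """
    t = lam / L
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
```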
It is known that proximal-gradient methods that use an approximate proximity operator converge under only weak assumptions [16, 17]; we briefly review this and other related work in the next section.
However, despite the many recent works showing impressive empirical performance of (accelerated)
proximal-gradient methods that use an approximate proximity operator [7, 13, 9, 10, 14, 15], up until
recently there was no theoretical analysis on how the error in the calculation of the proximity operator affects the convergence rate of proximal-gradient methods. In this work we show in several
contexts that, provided the error in the proximity operator calculation is controlled in an appropriate
way, inexact proximal-gradient strategies achieve the same convergence rates as the corresponding
exact methods. In particular, in Section 4 we first consider convex objectives and analyze the inexact
proximal-gradient (Proposition 1) and accelerated proximal-gradient (Proposition 2) methods. We
then analyze these two algorithms for strongly convex objectives (Proposition 3 and Proposition 4).
Note that in these analyses, we also consider the possibility that there is an error in the calculation of
the gradient of g. We then present an experimental comparison of various inexact proximal-gradient
strategies in the context of solving a structured sparsity problem (Section 5).
2 Related Work
The algorithm we shall focus on in this paper is the proximal-gradient method
$$x_k = \mathrm{prox}_L\!\left[y_{k-1} - \tfrac{1}{L}\big(g'(y_{k-1}) + e_k\big)\right], \qquad (3)$$
where e_k is the error in the calculation of the gradient and the proximity problem (2) is solved inexactly so that x_k has an error of ε_k in terms of the proximal objective function (2). In the basic proximal-gradient method we choose y_k = x_k, while in the accelerated proximal-gradient method we choose y_k = x_k + β_k(x_k − x_{k−1}), where the sequence (β_k) is chosen appropriately.
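To make recursion (3) concrete, here is a schematic NumPy sketch of the generic inexact method (our illustration, not the authors' code). Here `grad_g` may return an inexact gradient (the error e_k is implicit in it), and `prox_approx` stands for any user-supplied routine returning an ε_k-approximate solution of (2):

```python
import numpy as np

def inexact_proximal_gradient(x0, grad_g, prox_approx, L, n_iter,
                              accelerated=False):
    """Run recursion (3): x_k = prox_L[y_{k-1} - (1/L)(g'(y_{k-1}) + e_k)].

    grad_g(y)         -- possibly inexact gradient of g at y
    prox_approx(z, k) -- eps_k-approximate proximity operator (2) applied to z
    accelerated       -- if True, use y_k = x_k + (k-1)/(k+2) * (x_k - x_{k-1})
    """
    x_prev, x = x0, x0
    for k in range(1, n_iter + 1):
        if accelerated:
            y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)
        else:
            y = x
        z = y - grad_g(y) / L
        x_prev, x = x, prox_approx(z, k)
    return x
```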
There is a substantial amount of work on methods that use an exact proximity operator but have an error in the gradient calculation, corresponding to the special case where ε_k = 0 but e_k is non-zero. For example, when the e_k are independent, zero-mean, and finite-variance random variables, then proximal-gradient methods achieve the (optimal) error level of O(1/√k) [18, 19]. This is different
than the scenario we analyze in this paper since we do not assume unbiased nor independent errors,
but instead consider a sequence of errors converging to 0. This leads to faster convergence rates, and
makes our analysis applicable to the case of deterministic (and even adversarial) errors.
Several authors have recently analyzed the case of a fixed deterministic error in the gradient, and
shown that accelerated gradient methods achieve the optimal convergence rate up to some accuracy
that depends on the fixed error level [20, 21, 22], while the earlier work of [23] analyzes the gradient
method in the context of a fixed error level. This contrasts with our analysis, where by allowing
the error to change at every iteration we can achieve convergence to the optimal solution. Also, we
can tolerate a large error in early iterations when we are far from the solution, which may lead to
substantial computational gains. Other authors have analyzed the convergence rate of the gradient
and projected-gradient methods with a decreasing sequence of errors [24, 25], but this analysis does
not consider the important class of accelerated gradient methods. In contrast, the analysis of [22]
allows a decreasing sequence of errors (though convergence rates in this context are not explicitly
mentioned) and considers the accelerated projected-gradient method. However, the authors of this
work only consider the case of an exact projection step, and they assume the availability of an oracle
that yields global lower and upper bounds on the function. This non-intuitive oracle leads to a
novel analysis of smoothing methods, but leads to slower convergence rates than proximal-gradient
methods. The analysis of [21] considers errors in both the gradient and projection operators for
accelerated projected-gradient methods, but this analysis requires that the domain of the function is
compact. None of these works consider proximal-gradient methods.
In the context of proximal-point algorithms, there is a substantial literature on using inexact proximity operators with a decreasing sequence of errors, dating back to the seminal work of Rockafellar [26]. Accelerated proximal-point methods with a decreasing sequence of errors have also
been examined, beginning with [27]. However, unlike proximal-gradient methods where the proximity operator is only computed with respect to the non-smooth function h, proximal-point methods
require the calculation of the proximity operator with respect to the full objective function. In the
context of composite optimization problems of the form (1), this requires the calculation of the
proximity operator with respect to g + h. Since it ignores the structure of the problem, this proximity operator may be as difficult to compute (even approximately) as the minimizer of the original
problem.
Convergence of inexact proximal-gradient methods can be established with only weak assumptions
on the method used to approximately solve (2). For example, we can establish that inexact proximalgradient methods converge under some closedness assumptions on the mapping induced by the approximate proximity operator, and the assumption that the algorithm used to compute the inexact
proximity operator achieves sufficient descent on problem (2) compared to the previous iteration
xk?1 [16]. Convergence of inexact proximal-gradient methods can also be established under the
assumption that the norms of the errors are summable [17]. However, these prior works did not
consider the rate of convergence of inexact proximal-gradient methods, nor did they consider accelerated proximal-gradient methods. Indeed, the authors of [7] chose to use the non-accelerated variant of the proximal-gradient algorithm since even convergence of the accelerated proximal-gradient
method had not been established under an inexact proximity operator.
While preparing the final version of this work, [28] independently gave an analysis of the accelerated
proximal-gradient method with an inexact proximity operator and a decreasing sequence of errors
(assuming an exact gradient). Further, their analysis leads to a weaker dependence on the errors than
in our Proposition 2. However, while we only assume that the proximal problem can be solved up
to a certain accuracy, they make the much stronger assumption that the inexact proximity operator
yields an ?k -subdifferential of h [28, Definition 2.1]. Our analysis can be modified to give?an
improved dependence on the errors under this stronger assumption. In particular, the terms in ?i
ek and A
bk appearing in the propositions, leading to the
disappear from the expressions of Ak , A
optimal convergence rate with a slower decay of ?i . More details may be found in [29].
3 Notation and Assumptions
In this work, we assume that the smooth function g in (1) is convex and differentiable, and that its gradient g′ is Lipschitz-continuous with constant L, meaning that for all x and y in ℝ^d we have

$$\|g'(x) - g'(y)\| \leq L\,\|x - y\|.$$
This is a standard assumption in differentiable optimization; see [30, §2.1.1]. If g is twice-differentiable, this corresponds to the assumption that the eigenvalues of its Hessian are bounded above by L. In Propositions 3 and 4 only, we will also assume that g is μ-strongly convex (see [30, §2.1.3]), meaning that for all x and y in ℝ^d we have

$$g(y) \geq g(x) + \langle g'(x),\, y - x\rangle + \frac{\mu}{2}\|y - x\|^2.$$

In contrast to these assumptions on g, we will only assume that h in (1) is a lower semi-continuous proper convex function (see [31, §1.2]), but will not assume that h is differentiable or Lipschitz-continuous. This allows h to be any real-valued convex function, but also allows for the possibility that h is an extended real-valued convex function. For example, h could be the indicator function of a convex set, and in this case the proximity operator becomes the projection operator.
We will use x_k to denote the parameter vector at iteration k, and x* to denote a minimizer of f. We assume that such an x* exists, but do not assume that it is unique. We use e_k to denote the error in the calculation of the gradient at iteration k, and we use ε_k to denote the error in the proximal objective function achieved by x_k, meaning that

$$\frac{L}{2}\|x_k - y\|^2 + h(x_k) \;\leq\; \varepsilon_k + \min_{x \in \mathbb{R}^d}\left\{\frac{L}{2}\|x - y\|^2 + h(x)\right\}, \qquad (4)$$

where $y = y_{k-1} - \tfrac{1}{L}(g'(y_{k-1}) + e_k)$. Note that the proximal optimization problem (2) is
strongly convex and in practice we are often able to obtain such bounds via a duality gap (e.g.,
see [12] for the case of overlapping group `1 -regularization).
4 Convergence Rates of Inexact Proximal-Gradient Methods
In this section we present the analysis of the convergence rates of inexact proximal-gradient methods as a function of the sequence of solution accuracies for the proximal problems (ε_k) and the sequence of magnitudes of the errors in the gradient calculations (‖e_k‖). We shall use (H) to denote the set of four assumptions which will be made for each proposition:
- g is convex and has an L-Lipschitz-continuous gradient;
- h is a lower semi-continuous proper convex function;
- the function f = g + h attains its minimum at a certain x* ∈ ℝ^n;
- x_k is an ε_k-optimal solution to the proximal problem (2) in the sense of (4).
We first consider the basic proximal-gradient method in the convex case:
Proposition 1 (Basic proximal-gradient method - Convexity) Assume (H) and that we iterate recursion (3) with yk = xk . Then, for all k > 1, we have
!
k
2
p
1X
L
f
,
(5)
kx0 ? x? k + 2Ak + 2Bk
xi ? f (x? ) 6
k i=1
2k
!
r
k
k
X
X
2?i
kei k
?i
with Ak =
+
, Bk =
.
L
L
L
i=1
i=1
The proof may be found in [29]. Note that while we have stated the proposition in terms of the function value achieved by the average of the iterates, it trivially also holds for the iteration that achieves the lowest function value. This result implies that the well-known O(1/k) convergence rate for the gradient method without errors still holds when both (‖e_k‖) and (√ε_k) are summable. A sufficient condition to achieve this is that ‖e_k‖ decreases as O(1/k^{1+δ}) while ε_k decreases as O(1/k^{2+δ′}) for any δ, δ′ > 0. Note that a faster convergence of these two errors will not improve the convergence rate, but will yield a better constant factor.
It is interesting to consider what happens if (‖e_k‖) or (√ε_k) is not summable. For instance, if ‖e_k‖ and √ε_k decrease as O(1/k), then A_k grows as O(log k) (note that B_k is always smaller than A_k) and the convergence of the function values is in O(log²k / k). Finally, a necessary condition to obtain convergence is that the partial sums A_k and B_k need to be in o(√k).
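As a quick numerical illustration of these conditions (ours, not the paper's), the partial sums defining A_k stay bounded under a summable schedule but grow logarithmically under ‖e_i‖ = 1/i:

```python
import numpy as np

i = np.arange(1, 10**6 + 1)
for p in (1.0, 1.5):               # ||e_i|| = i**(-p); take L = 1 and eps_i = 0
    A = np.cumsum(i ** (-p))       # gradient-error part of A_k in Proposition 1
    print(f"p={p}: A at k=1e3 is {A[999]:.2f}, at k=1e6 is {A[-1]:.2f}")
# p=1.0 keeps growing (~log k), degrading the rate to O(log^2 k / k);
# p=1.5 plateaus, preserving the error-free O(1/k) rate.
```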
We now turn to the case of an accelerated proximal-gradient method. We focus on a basic variant of the algorithm where β_k is set to (k − 1)/(k + 2) [32, Eq. (19) and (27)]:

Proposition 2 (Accelerated proximal-gradient method - Convexity) Assume (H) and that we iterate recursion (3) with $y_k = x_k + \frac{k-1}{k+2}(x_k - x_{k-1})$. Then, for all k ≥ 1, we have

$$f(x_k) - f(x^\star) \;\leq\; \frac{2L}{(k+1)^2}\left(\|x_0 - x^\star\| + 2\tilde{A}_k + \sqrt{2\tilde{B}_k}\right)^2, \qquad (6)$$

$$\text{with} \quad \tilde{A}_k = \sum_{i=1}^k i\left(\frac{\|e_i\|}{L} + \sqrt{\frac{2\varepsilon_i}{L}}\right), \qquad \tilde{B}_k = \sum_{i=1}^k \frac{i^2\varepsilon_i}{L}.$$
In this case, we require the series (k‖e_k‖) and (k√ε_k) to be summable to achieve the optimal O(1/k²) rate, which is an (unsurprisingly) stronger constraint than in the basic case. A sufficient condition is for ‖e_k‖ and √ε_k to decrease as O(1/k^{2+δ}) for any δ > 0. Note that, as opposed to Proposition 1 that is stated for the average iterate, this bound is for the last iterate x_k.
Again, it is interesting to see what happens when the summability assumption is not met. First, if ‖e_k‖ or √ε_k decreases at a rate of O(1/k²), then k(‖e_k‖ + √ε_k) decreases as O(1/k) and Ã_k grows as O(log k) (note that B̃_k is always smaller than Ã_k), yielding a convergence rate of O(log²k / k²) for f(x_k) − f(x*). Also, and perhaps more interestingly, if ‖e_k‖ or √ε_k decreases at a rate of O(1/k), Eq. (6) does not guarantee convergence of the function values. More generally, the form of Ã_k and B̃_k indicates that errors have a greater effect on the accelerated method than on the basic method. Hence, as also discussed in [22], unlike in the error-free case the accelerated method may not necessarily be better than the basic method because it is more sensitive to errors in the computation.
In the case where g is strongly convex it is possible to obtain linear convergence rates that depend on the ratio γ = μ/L, as opposed to the sublinear convergence rates discussed above. In particular, we obtain the following convergence rate on the iterates of the basic proximal-gradient method:
Proposition 3 (Basic proximal-gradient method - Strong convexity) Assume (H), that g is μ-strongly convex, and that we iterate recursion (3) with y_k = x_k. Then, for all k ≥ 1, we have:

$$\|x_k - x^\star\| \;\leq\; (1-\gamma)^k\left(\|x_0 - x^\star\| + \bar{A}_k\right), \qquad (7)$$

$$\text{with} \quad \bar{A}_k = \sum_{i=1}^k (1-\gamma)^{-i}\left(\frac{\|e_i\|}{L} + \sqrt{\frac{2\varepsilon_i}{L}}\right).$$
A consequence of this proposition is that we obtain a linear rate of convergence even in the presence of errors, provided that ‖e_k‖ and √ε_k decrease linearly to 0. If they do so at a rate Q′ < (1 − γ), then the convergence rate of ‖x_k − x*‖ is linear with constant (1 − γ), as in the error-free algorithm. If we have Q′ > (1 − γ), then the convergence of ‖x_k − x*‖ is linear with constant Q′. If we have Q′ = (1 − γ), then ‖x_k − x*‖ converges to 0 as O(k(1 − γ)^k) = o([(1 − γ) + δ′]^k) for all δ′ > 0.
Finally, we consider the accelerated proximal-gradient algorithm when g is strongly convex. We focus on a basic variant of the algorithm where β_k is set to $(1-\sqrt{\gamma})/(1+\sqrt{\gamma})$ [30, §2.2.1]:

Proposition 4 (Accelerated proximal-gradient method - Strong convexity) Assume (H), that g is μ-strongly convex, and that we iterate recursion (3) with $y_k = x_k + \frac{1-\sqrt{\gamma}}{1+\sqrt{\gamma}}(x_k - x_{k-1})$. Then, for all k ≥ 1, we have

$$f(x_k) - f(x^\star) \;\leq\; (1-\sqrt{\gamma})^k\left(\sqrt{2\big(f(x_0) - f(x^\star)\big)} + \hat{A}_k\sqrt{\tfrac{2}{\mu}} + \sqrt{2\hat{B}_k}\right)^2, \qquad (8)$$

$$\text{with} \quad \hat{A}_k = \sum_{i=1}^k\left(\|e_i\| + \sqrt{2L\varepsilon_i}\right)(1-\sqrt{\gamma})^{-i/2}, \qquad \hat{B}_k = \sum_{i=1}^k \varepsilon_i\,(1-\sqrt{\gamma})^{-i}.$$
Note that while we have stated the result in terms of function values, we obtain an analogous result on the iterates because by strong convexity of f we have

$$\frac{\mu}{2}\|x_k - x^\star\|^2 \;\leq\; f(x_k) - f(x^\star).$$
This proposition implies that we obtain a linear rate of convergence in the presence of errors provided that ‖e_k‖² and ε_k decrease linearly to 0. If they do so at a rate Q′ < (1 − √γ), then the constant is (1 − √γ), while if Q′ > (1 − √γ) then the constant will be Q′. Thus, the accelerated inexact proximal-gradient method will have a faster convergence rate than the exact basic proximal-gradient method provided that Q′ < (1 − γ). Oddly, in our analysis of the strongly convex case, the accelerated method is less sensitive to errors than the basic method. However, unlike the basic method, the accelerated method requires knowing μ in addition to L. If μ is misspecified, then the convergence rate of the accelerated method may be slower than the basic method.
5 Experiments
We tested the basic inexact proximal-gradient and accelerated proximal-gradient methods on the CUR-like factorization optimization problem introduced in [33] to approximate a given matrix W,

$$\min_X \; \frac{1}{2}\|W - WXW\|_F^2 + \lambda_{\mathrm{row}}\sum_{i=1}^{n_r}\|X^i\|_p + \lambda_{\mathrm{col}}\sum_{j=1}^{n_c}\|X_j\|_p.$$
Under an appropriate choice of p, this optimization problem yields a matrix X with sparse rows and sparse columns, meaning that entire rows and columns of the matrix X are set to exactly zero. In [33], the authors used an accelerated proximal-gradient method and chose p = ∞, since under this choice the proximity operator can be computed exactly. However, this has the undesirable effect that it also encourages all values in the same row (or column) to have the same magnitude. The more natural choice of p = 2 was not explored, since in this case there is no known algorithm to exactly compute the proximity operator.
Our experiments focused on the case of p = 2. In this case, it is possible to very quickly compute an approximate proximity operator using the block coordinate descent (BCD) algorithm presented in [12], which is equivalent to the proximal variant of Dykstra's algorithm introduced by [34]. In our implementation of the BCD method, we alternate between computing the proximity operator with respect to the rows and to the columns. Since the BCD method allows us to compute a duality gap when solving the proximal problem, we can run the method until the duality gap is below a given error threshold ε_k to find an x_{k+1} satisfying (4).
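To give a flavour of the inner solver, the key primitive for p = 2 is group soft-thresholding of the rows (or columns). The sketch below is our simplified illustration: the actual BCD/proximal-Dykstra method of [12, 34] adds correction terms and monitors a duality gap, both of which we omit here.

```python
import numpy as np

def group_soft_threshold_rows(Z, t):
    """Prox of t * sum_i ||Z^i||_2 over rows: shrink each row's norm by t."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * Z

def approx_prox_rows_cols(Y, t_row, t_col, n_pass=3):
    """Crude alternating approximation to the prox of the sum of row and
    column group penalties (the exact method adds Dykstra corrections)."""
    X = Y
    for _ in range(n_pass):
        X = group_soft_threshold_rows(X, t_row)        # row groups
        X = group_soft_threshold_rows(X.T, t_col).T    # column groups
    return X
```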
In our experiments, we used the four data sets examined by [33]¹ and we chose λ_row = .01 and λ_col = .01, which yielded approximately 25-40% non-zero entries in X (depending on the data set). Rather than assuming we are given the Lipschitz constant L, on the first iteration we set L to 1 and, following [2], we double our estimate anytime g(x_k) > g(y_{k−1}) + ⟨g′(y_{k−1}), x_k − y_{k−1}⟩ + (L/2)‖x_k − y_{k−1}‖². We tested three different ways to terminate the approximate proximal problem, each parameterized by a parameter α (each rule is sketched in code after this list):
- ε_k = 1/k^α: running the BCD algorithm until the duality gap is below 1/k^α.
- ε_k = α: running the BCD algorithm until the duality gap is below α.
- n = α: running the BCD algorithm for a fixed number of iterations α.
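In code, the three stopping rules are just different maps from the outer iteration counter k to an inner-solver budget; a minimal sketch (function and parameter names are ours):

```python
def inner_budget(k, strategy, alpha):
    """Return (duality-gap tolerance, iteration cap) for outer iteration k."""
    if strategy == "decreasing":      # eps_k = 1 / k**alpha
        return 1.0 / k**alpha, None
    if strategy == "fixed_gap":       # eps_k = alpha
        return alpha, None
    if strategy == "fixed_iters":     # n = alpha inner iterations
        return None, int(alpha)
    raise ValueError(strategy)
```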
Note that all three strategies lead to global convergence in the case of the basic proximal-gradient method, the first two give a convergence rate up to some fixed optimality tolerance, and in this paper we have shown that the first one (for large enough α) yields a convergence rate for an arbitrary optimality tolerance. Note that the iterates produced by the BCD iterations are sparse, so we expected the algorithms to spend the majority of their time solving the proximity problem. Thus, we used the function value against the number of BCD iterations as a measure of performance. We plot the results after 500 BCD iterations for the first two data sets for the proximal-gradient method in Figure 1, and the accelerated proximal-gradient method in Figure 2. The results for the other two data sets are similar, and are included in [29]. In these plots, the first column varies α using the choice ε_k = 1/k^α, the second column varies α using the choice ε_k = α, and the third column varies α using the choice n = α. We also include one of the best methods from the first column in the second and third columns as a reference.
In the context of proximal-gradient methods the choice of ε_k = 1/k³, which is one choice that achieves the fastest convergence rate according to our analysis, gives the best performance across all four data sets. However, in these plots we also see that reasonable performance can be achieved by any of the three strategies above provided that α is chosen carefully. For example, choosing n = 3 or choosing ε_k = 10⁻⁶ both give reasonable performance. However, these are only empirical observations for these data sets and they may be ineffective for other data sets or if we change the number of iterations, while we have given theoretical justification for the choice ε_k = 1/k³.
Similar trends are observed for the case of accelerated proximal-gradient methods, though the choice of ε_k = 1/k³ (which no longer achieves the fastest convergence rate according to our analysis) no longer dominates the other methods in the accelerated setting. For the SRBCT data set the choice

¹The datasets are freely available at http://www.gems-system.org.
[Figure 1 plots omitted: six log-scale panels of objective value versus BCD iterations (0-500). Left column: ε_k = 1/k^α for α = 1, ..., 5; middle column: ε_k ∈ {10⁻², 10⁻⁴, 10⁻⁶, 10⁻¹⁰}; right column: n ∈ {1, 2, 3, 5}; the middle and right columns include ε_k = 1/k³ as a reference.]
Figure 1: Objective function against number of proximal iterations for the proximal-gradient method
with different strategies for terminating the approximate proximity calculation. The top row is for
the 9 Tumors data, the bottom row is for the Brain Tumor1 data.
[Figure 2 plots omitted: same layout as Figure 1, with ε_k = 1/k⁴ as the reference curve in the second and third columns.]
Figure 2: Objective function against number of proximal iterations for the accelerated proximal-gradient method with different strategies for terminating the approximate proximity calculation. The top row is for the 9 Tumors data, the bottom row is for the Brain Tumor1 data.
ε_k = 1/k⁴, which is a choice that achieves the fastest convergence rate up to a poly-logarithmic factor, yields better performance than ε_k = 1/k³. Interestingly, the only choice that yields the fastest possible convergence rate (ε_k = 1/k⁵) had reasonable performance but did not give the best performance on any data set. This seems to reflect the trade-off between performing inner BCD iterations to achieve a small duality gap and performing outer gradient iterations to decrease the value of f. Also, the constant terms which were not taken into account in the analysis do play an important role here, due to the relatively small number of outer iterations performed.
6 Discussion
Alternatives to inexact proximal methods for solving structured sparsity problems are smoothing
methods [35] and alternating direction methods [36]. However, a major disadvantage of both these
approaches is that the iterates are not sparse, so they can not take advantage of the sparsity of the
problem when running the algorithm. In contrast, the method proposed in this paper has the appealing property that it tends to generate sparse iterates. Further, the accelerated smoothing method
only has a convergence rate of O(1/k), and the performance of alternating direction methods is
often sensitive to the exact choice of their penalty parameter. On the other hand, while our analysis suggests using a sequence of errors like O(1/k^α) for α large enough, the practical performance of inexact proximal-gradient methods will be sensitive to the exact choice of this sequence.
Although we have illustrated the use of our results in the context of a structured sparsity problem,
inexact proximal-gradient methods are also used in other applications such as total-variation [7, 8]
and nuclear-norm [9, 10] regularization. This work provides a theoretical justification for using
inexact proximal-gradient methods in these and other applications, and suggests some guidelines
for practioners that do not want to lose the appealing convergence rates of these methods. Further,
although our experiments and much of our discussion focus on errors in the calculation of the proximity operator, our analysis also allows for an error in the calculation of the gradient. This may also
be useful in a variety of contexts. For example, errors in the calculation of the gradient arise when
fitting undirected graphical models and using an iterative method to approximate the gradient of the
log-partition function [37]. Other examples include using a reduced set of training examples within
kernel methods [38] or subsampling to solve semidefinite programming problems [39].
In our analysis, we assume that the smoothness constant L is known, but it would be interesting to extend methods for estimating L in the exact case [2] to the case of inexact algorithms. In the context of accelerated methods for strongly convex optimization, our analysis also assumes that μ is known, and it would be interesting to explore variants that do not make this assumption. We also note that if the basic proximal-gradient method is given knowledge of μ, then our analysis can be modified to obtain a faster linear convergence rate of (1 − γ)/(1 + γ) instead of (1 − γ) for strongly-convex optimization using a step size of 2/(μ + L); see Theorem 2.1.15 of [30]. Finally, we note that there
has been recent interest in inexact proximal Newton-like methods [40], and it would be interesting
to analyze the effect of errors on the convergence rates of these methods.
Acknowledgements Mark Schmidt, Nicolas Le Roux, and Francis Bach are supported by the
European Research Council (SIERRA-ERC-239993).
References
[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[2] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE Discussion Papers, (2007/76), 2007.
[3] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B, 58(1):267-288, 1996.
[4] S.S. Chen, D.L. Donoho, and M.A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33-61, 1998.
[5] S.J. Wright, R.D. Nowak, and M.A.T. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57(7):2479-2493, 2009.
[6] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. In S. Sra, S. Nowozin, and S.J. Wright, editors, Optimization for Machine Learning. MIT Press, 2011.
[7] J. Fadili and G. Peyré. Total variation projection with first order schemes. IEEE Transactions on Image Processing, 20(3):657-669, 2011.
[8] X. Chen, S. Kim, Q. Lin, J.G. Carbonell, and E.P. Xing. Graph-structured multi-task regression and an efficient optimization method for general fused Lasso. arXiv:1005.3579v1, 2010.
[9] J.-F. Cai, E.J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4), 2010.
[10] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, 128(1):321-353, 2011.
[11] L. Jacob, G. Obozinski, and J.-P. Vert. Group Lasso with overlap and graph Lasso. ICML, 2009.
[12] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. JMLR, 12:2297-2334, 2011.
[13] A. Barbero and S. Sra. Fast Newton-type methods for total variation regularization. ICML, 2011.
[14] J. Liu and J. Ye. Fast overlapping group Lasso. arXiv:1009.0306v1, 2010.
[15] M. Schmidt and K. Murphy. Convex structure learning in log-linear models: Beyond pairwise potentials. AISTATS, 2010.
[16] M. Patriksson. A unified framework of descent algorithms for nonlinear programs and variational inequalities. PhD thesis, Department of Mathematics, Linköping University, Sweden, 1995.
[17] P.L. Combettes. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization, 53(5-6):475-504, 2004.
[18] J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. JMLR, 10:2873-2898, 2009.
[19] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. JMLR, 10:777-801, 2009.
[20] A. d'Aspremont. Smooth optimization with approximate gradient. SIAM Journal on Optimization, 19(3):1171-1183, 2008.
[21] M. Baes. Estimate sequence methods: extensions and approximations. IFOR internal report, ETH Zurich, 2009.
[22] O. Devolder, F. Glineur, and Y. Nesterov. First-order methods of smooth convex optimization with inexact oracle. CORE Discussion Papers, (2011/02), 2011.
[23] A. Nedic and D. Bertsekas. Convergence rate of incremental subgradient algorithms. Stochastic Optimization: Algorithms and Applications, pages 263-304, 2000.
[24] Z.-Q. Luo and P. Tseng. Error bounds and convergence analysis of feasible descent methods: A general approach. Annals of Operations Research, 46-47(1):157-178, 1993.
[25] M.P. Friedlander and M. Schmidt. Hybrid deterministic-stochastic methods for data fitting. arXiv:1104.2373, 2011.
[26] R.T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877-898, 1976.
[27] O. Güler. New proximal point algorithms for convex minimization. SIAM Journal on Optimization, 2(4):649-664, 1992.
[28] S. Villa, S. Salzo, L. Baldassarre, and A. Verri. Accelerated and inexact forward-backward algorithms. Optimization Online, 2011.
[29] M. Schmidt, N. Le Roux, and F. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. arXiv:1109.2415v2, 2011.
[30] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2004.
[31] D.P. Bertsekas. Convex Optimization Theory. Athena Scientific, 2009.
[32] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization, 2008.
[33] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Convex and network flow optimization for structured sparsity. JMLR, 12:2681-2720, 2011.
[34] H.H. Bauschke and P.L. Combettes. A Dykstra-like algorithm for two monotone operators. Pacific Journal of Optimization, 4(3):383-391, 2008.
[35] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Prog., 103(1):127-152, 2005.
[36] P.L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In H.H. Bauschke, R.S. Burachik, P.L. Combettes, V. Elser, D.R. Luke, and H. Wolkowicz, editors, Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pages 185-212. Springer, 2011.
[37] M.J. Wainwright, T.S. Jaakkola, and A.S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching. AISTATS, 2003.
[38] J. Kivinen, A.J. Smola, and R.C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8):2165-2176, 2004.
[39] A. d'Aspremont. Subsampling algorithms for semidefinite programming. arXiv:0803.1990v5, 2009.
[40] M. Schmidt, D. Kim, and S. Sra. Projected Newton-type methods in machine learning. In S. Sra, S. Nowozin, and S. Wright, editors, Optimization for Machine Learning. MIT Press, 2011.
Statistical Performance of Convex Tensor
Decomposition
Ryota Tomioka† and Taiji Suzuki†
†Department of Mathematical Informatics, The University of Tokyo, Tokyo 113-8656, Japan
[email protected]
[email protected]

Kohei Hayashi‡
‡Graduate School of Information Science, Nara Institute of Science and Technology, Nara 630-0192, Japan
[email protected]

Hisashi Kashima†,§
§Basic Research Programs PRESTO, Synthesis of Knowledge for Information Oriented Society, JST, Tokyo 102-8666, Japan
[email protected]
Abstract
We analyze the statistical performance of a recently proposed convex tensor decomposition algorithm. Conventionally tensor decomposition has been formulated as non-convex optimization problems, which hindered the analysis of their
performance. We show under some conditions that the mean squared error of
the convex method scales linearly with the quantity we call the normalized rank
of the true tensor. The current analysis naturally extends the analysis of convex
low-rank matrix estimation to tensors. Furthermore, we show through numerical
experiments that our theory can precisely predict the scaling behaviour in practice.
1 Introduction
Tensors (multi-way arrays) generalize matrices and naturally represent data having more than two
modalities. For example, multi-variate time-series, for instance, electroencephalography (EEG),
recorded from multiple subjects under various conditions naturally form a tensor. Moreover, in
collaborative filtering, users' preferences on products, conventionally represented as a matrix, can
be represented as a tensor when the preferences change over time or context.
For the analysis of tensor data, various models and methods for the low-rank decomposition of
tensors have been proposed (see Kolda & Bader [12] for a recent survey). These techniques have
recently become increasingly popular in data-mining [1, 14] and computer vision [25, 26]. Besides
they have proven useful in chemometrics [4], psychometrics [24], and signal processing [20, 7, 8].
Despite empirical success, the statistical performance of tensor decomposition algorithms has not
been fully elucidated. The difficulty lies in the non-convexity of the conventional tensor decomposition algorithms (e.g., alternating least squares [6]). In addition, studies have revealed many
discrepancies (see [12]) between matrix rank and tensor rank, which make extension of studies on
the performance of low-rank matrix models (e.g., [9]) challenging.
Recently, several authors [21, 10, 13, 23] have focused on the notion of tensor mode-k rank (instead
of tensor rank), which is related to the Tucker decomposition [24]. They discovered that regularized
estimation based on the Schatten 1-norm, which is a popular technique for recovering low-rank
matrices via convex optimization, can also be applied to tensor decomposition. In particular, the
[Figure 1 plot omitted: estimation error (log scale) versus fraction of observed elements, with curves for Convex and Tucker (exact) and a gray dashed line at the optimization tolerance 10⁻³.]

Figure 1: Result of estimation of a rank-(7, 8, 9) tensor of dimensions 50 × 50 × 20 from partial measurements; see [23] for the details. The estimation error $|||\widehat{\mathcal{W}} - \mathcal{W}^*|||_F$ is plotted against the fraction of observed elements m = M/N. Error bars over 10 repetitions are also shown. Convex refers to the convex tensor decomposition based on the minimization problem (7). Tucker (exact) refers to the conventional (non-convex) Tucker decomposition [24] at the correct rank. The gray dashed line shows the optimization tolerance 10⁻³. The question is how we can predict the point where the generalization begins (roughly m = 0.35 in this plot).
study in [23] showed that there is a clear transition at a certain number of samples where the error drops dramatically from no generalization to perfect generalization (see Figure 1).

In this paper, motivated by the above recent work, we mathematically analyze the performance of convex tensor decomposition. The new convex formulation for tensor decomposition allows us to generalize recent results on Schatten 1-norm-regularized estimation of matrices (see [17, 18, 5, 19]). Under a general setting we show how the estimation error scales with the mode-k ranks of the true tensor. Furthermore, we analyze the specific settings of (i) noisy tensor decomposition and (ii) random Gaussian design. In the first setting, we assume that all the elements of a low-rank tensor are observed with noise and the goal is to recover the underlying low-rank structure. This is the most common setting in which a tensor decomposition algorithm is used. In the second setting, we assume that the unknown tensor is a coefficient of a tensor-input scalar-output regression problem and the input tensors (design) are randomly given from independent Gaussian distributions. Surprisingly, it turns out that the random Gaussian setting can precisely predict the phase-transition-like behaviour in Figure 1. To the best of our knowledge, this is the first paper that rigorously studies the performance of a tensor decomposition algorithm.
2 Notation
In this section, we introduce the notation we use in this paper. Moreover, we introduce a Hölder-like inequality (3) and the notion of mode-k decomposability (5), which play central roles in our analysis.
Let $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_K}$ be a K-way tensor. We denote the number of elements in $\mathcal{X}$ by $N = \prod_{k=1}^K n_k$. The inner product between two tensors is defined as $\langle \mathcal{W}, \mathcal{X} \rangle = \mathrm{vec}(\mathcal{W})^\top \mathrm{vec}(\mathcal{X})$, where vec is the vectorization. In addition, we define the Frobenius norm of a tensor $|||\mathcal{X}|||_F = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}$. The mode-k unfolding $X_{(k)}$ is the $n_k \times \bar{n}_{\setminus k}$ ($\bar{n}_{\setminus k} := \prod_{k' \neq k} n_{k'}$) matrix obtained by concatenating the mode-k fibers (the vectors obtained by fixing every index of $\mathcal{X}$ but the kth index) of $\mathcal{X}$ as column vectors. The mode-k rank of a tensor $\mathcal{X}$, denoted by $\mathrm{rank}_k(\mathcal{X})$, is the rank of the mode-k unfolding $X_{(k)}$ (as a matrix). Note that when K = 2, $\mathcal{X}$ is actually a matrix, and $X_{(2)} = X_{(1)}^\top$. We say a tensor $\mathcal{X}$ is rank-$(r_1, \ldots, r_K)$ when $r_k = \mathrm{rank}_k(\mathcal{X})$ for $k = 1, \ldots, K$. Note that the mode-k rank can be computed in polynomial time, because it boils down to computing a matrix rank, whereas computing the tensor rank is NP complete [11]. See [12] for more details.
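As an illustration (not from the paper), the mode-k unfolding and mode-k rank can be computed in a few lines of NumPy; note that conventions for ordering the columns of the unfolding vary across the literature:

```python
import numpy as np

def unfold(X, k):
    """Mode-k unfolding: n_k x (N / n_k) matrix with mode-k fibers as columns."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def mode_k_rank(X, k, tol=1e-10):
    """rank_k(X) = matrix rank of the mode-k unfolding."""
    s = np.linalg.svd(unfold(X, k), compute_uv=False)
    return int((s > tol * s[0]).sum())

X = np.random.randn(4, 5, 6)
print([mode_k_rank(X, k) for k in range(3)])   # at most [4, 5, 6]
```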
Since for each k the convex envelope of the mode-k rank is given as the Schatten 1-norm [18] (known as the trace norm [22] or the nuclear norm [3]), it is natural to consider the following overlapped Schatten 1-norm $|||\mathcal{W}|||_{S_1}$ of a tensor $\mathcal{W} \in \mathbb{R}^{n_1 \times \cdots \times n_K}$ (see also [21]):

$$|||\mathcal{W}|||_{S_1} = \frac{1}{K}\sum_{k=1}^K \|W_{(k)}\|_{S_1}, \qquad (1)$$
where $W_{(k)}$ is the mode-k unfolding of $\mathcal{W}$. Here $\|\cdot\|_{S_1}$ is the Schatten 1-norm for a matrix,

$$\|W\|_{S_1} = \sum_{j=1}^r \sigma_j(W),$$

where $\sigma_j(W)$ is the jth largest singular value of W. The dual norm of the Schatten 1-norm is the Schatten ∞-norm (known as the spectral norm),

$$\|X\|_{S_\infty} = \max_{j=1,\ldots,r} \sigma_j(X).$$

Since the two norms $\|\cdot\|_{S_1}$ and $\|\cdot\|_{S_\infty}$ are dual to each other, we have the following inequality:

$$|\langle W, X \rangle| \;\leq\; \|W\|_{S_1}\|X\|_{S_\infty}, \qquad (2)$$

where $\langle W, X \rangle$ is the inner product of W and X.
The same inequality holds for the overlapped Schatten 1-norm (1) and its dual norm. The dual norm of the overlapped Schatten 1-norm can be characterized by the following lemma.

Lemma 1. The dual norm of the overlapped Schatten 1-norm, denoted $|||\cdot|||_{S_1^*}$, is given as the infimum of the maximum mode-k spectral norm over the tensors whose average equals the given tensor $\mathcal{X}$, as follows:

$$|||\mathcal{X}|||_{S_1^*} = \inf_{\frac{1}{K}(\mathcal{Y}^{(1)} + \cdots + \mathcal{Y}^{(K)}) = \mathcal{X}} \;\max_{k=1,\ldots,K} \|Y^{(k)}_{(k)}\|_{S_\infty},$$

where $Y^{(k)}_{(k)}$ is the mode-k unfolding of $\mathcal{Y}^{(k)}$. Moreover, the following upper bound on the dual norm $|||\cdot|||_{S_1^*}$ is valid:

$$|||\mathcal{X}|||_{S_1^*} \;\leq\; |||\mathcal{X}|||_{\mathrm{mean}} := \frac{1}{K}\sum_{k=1}^K \|X_{(k)}\|_{S_\infty}.$$

Proof. The first part can be shown by solving the dual of the maximization problem $|||\mathcal{X}|||_{S_1^*} := \sup\,\langle \mathcal{W}, \mathcal{X} \rangle$ s.t. $|||\mathcal{W}|||_{S_1} \leq 1$. The second part is obtained by setting $\mathcal{Y}^{(k)} = K\mathcal{X}\big/\big(c_k \sum_{k'=1}^K 1/c_{k'}\big)$, where $c_k = \|X_{(k)}\|_{S_\infty}$, and using Jensen's inequality.
According to Lemma 1, we have the following Hölder-like inequality:

$$|\langle \mathcal{W}, \mathcal{X} \rangle| \;\leq\; |||\mathcal{W}|||_{S_1}\, |||\mathcal{X}|||_{S_1^*} \;\leq\; |||\mathcal{W}|||_{S_1}\, |||\mathcal{X}|||_{\mathrm{mean}}. \qquad (3)$$

Note that the above bound is tighter than the more intuitive relation $|\langle \mathcal{W}, \mathcal{X} \rangle| \leq |||\mathcal{W}|||_{S_1}\, |||\mathcal{X}|||_{S_\infty}$ (with $|||\mathcal{X}|||_{S_\infty} := \max_{k=1,\ldots,K} \|X_{(k)}\|_{S_\infty}$), which one might come up with as an analogy to the matrix case (2).
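Both sides of inequality (3) are straightforward to evaluate numerically. The following self-contained sketch (our illustration) computes the overlapped Schatten 1-norm and the mean spectral norm, and checks (3) on random tensors:

```python
import numpy as np

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def overlapped_schatten1(W):
    """|||W|||_{S1} = (1/K) * sum_k (sum of singular values of W_(k))."""
    K = W.ndim
    return sum(np.linalg.svd(unfold(W, k), compute_uv=False).sum()
               for k in range(K)) / K

def mean_spectral(X):
    """|||X|||_mean = (1/K) * sum_k (largest singular value of X_(k))."""
    K = X.ndim
    return sum(np.linalg.svd(unfold(X, k), compute_uv=False)[0]
               for k in range(K)) / K

W, X = np.random.randn(4, 5, 6), np.random.randn(4, 5, 6)
assert abs((W * X).sum()) <= overlapped_schatten1(W) * mean_spectral(X) + 1e-9
```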
Finally, let $\mathcal{W}^* \in \mathbb{R}^{n_1 \times \cdots \times n_K}$ be the low-rank tensor that we wish to recover. We assume that $\mathcal{W}^*$ is rank-$(r_1, \ldots, r_K)$. Thus, for each k we have

$$W^*_{(k)} = U_k S_k V_k^\top \quad (k = 1, \ldots, K),$$

where $U_k \in \mathbb{R}^{n_k \times r_k}$ and $V_k \in \mathbb{R}^{\bar{n}_{\setminus k} \times r_k}$ are orthogonal, and $S_k \in \mathbb{R}^{r_k \times r_k}$ is diagonal. Let $\Delta \in \mathbb{R}^{n_1 \times \cdots \times n_K}$ be an arbitrary tensor. We define the mode-k orthogonal complement $\Delta''_k$ of an unfolding $\Delta_{(k)} \in \mathbb{R}^{n_k \times \bar{n}_{\setminus k}}$ of $\Delta$ with respect to the true low-rank tensor $\mathcal{W}^*$ as follows:

$$\Delta''_k = (I_{n_k} - U_k U_k^\top)\,\Delta_{(k)}\,(I_{\bar{n}_{\setminus k}} - V_k V_k^\top). \qquad (4)$$

In addition, $\Delta'_k := \Delta_{(k)} - \Delta''_k$ is the component having overlapped row/column space with the unfolding of the true tensor $W^*_{(k)}$. Note that the decomposition $\Delta_{(k)} = \Delta'_k + \Delta''_k$ is defined for each mode; thus we use subscript k instead of (k).

Using the decomposition defined above, we have the following equality, which we call the mode-k decomposability of the Schatten 1-norm:

$$\|W^*_{(k)} + \Delta''_k\|_{S_1} = \|W^*_{(k)}\|_{S_1} + \|\Delta''_k\|_{S_1} \quad (k = 1, \ldots, K). \qquad (5)$$

The above decomposition is defined for each mode and thus is weaker than the notion of decomposability discussed by Negahban et al. [15].
3 Theory
In this section, we first present a deterministic result that holds under a certain choice of the regularization constant $\lambda_M$ and an assumption called restricted strong convexity. Then, we focus on special cases to justify the choice of regularization constant and the restricted strong convexity assumption. We analyze the settings of (i) noisy tensor decomposition and (ii) random Gaussian design in Section 3.2 and Section 3.3, respectively.

3.1 Main result
Our goal is to estimate an unknown rank-$(r_1, \ldots, r_K)$ tensor $\mathcal{W}^* \in \mathbb{R}^{n_1 \times \cdots \times n_K}$ from observations

$$y_i = \langle \mathcal{X}_i, \mathcal{W}^* \rangle + \epsilon_i \quad (i = 1, \ldots, M). \qquad (6)$$

Here the noise $\epsilon_i$ follows the independent zero-mean Gaussian distribution with variance $\sigma^2$.
We employ the regularized empirical risk minimization problem proposed in [21, 10, 13, 23] for the estimation of $\mathcal{W}$, as follows:

$$\min_{\mathcal{W} \in \mathbb{R}^{n_1 \times \cdots \times n_K}} \; \frac{1}{2M}\|y - \mathfrak{X}(\mathcal{W})\|_2^2 + \lambda_M\, |||\mathcal{W}|||_{S_1}, \qquad (7)$$

where $y = (y_1, \ldots, y_M)^\top$ is the collection of observations; $\mathfrak{X}: \mathbb{R}^{n_1 \times \cdots \times n_K} \to \mathbb{R}^M$ is a linear operator that maps $\mathcal{W}$ to the M-dimensional output vector $\mathfrak{X}(\mathcal{W}) = (\langle \mathcal{X}_1, \mathcal{W} \rangle, \ldots, \langle \mathcal{X}_M, \mathcal{W} \rangle)^\top \in \mathbb{R}^M$. The Schatten 1-norm term penalizes every mode of $\mathcal{W}$ to be jointly low-rank (see Equation (1)); $\lambda_M > 0$ is the regularization constant. Accordingly, the solution of the minimization problem (7) is typically a low-rank tensor when $\lambda_M$ is sufficiently large. In addition, we denote the adjoint operator of $\mathfrak{X}$ as $\mathfrak{X}^*: \mathbb{R}^M \to \mathbb{R}^{n_1 \times \cdots \times n_K}$; that is, $\mathfrak{X}^*(\epsilon) = \sum_{i=1}^M \epsilon_i \mathcal{X}_i \in \mathbb{R}^{n_1 \times \cdots \times n_K}$.
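For concreteness, a sketch of evaluating the objective in (7) for generic design tensors (helper names are ours; a real solver would of course also need the proximal machinery discussed in [21, 10, 13, 23]):

```python
import numpy as np

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def objective(W, y, Xs, lam):
    """(1/2M) * ||y - X(W)||_2^2 + lam * |||W|||_{S1} for design tensors Xs."""
    M, K = len(y), W.ndim
    resid = y - np.array([(Xi * W).sum() for Xi in Xs])   # X(W)_i = <X_i, W>
    s1 = sum(np.linalg.svd(unfold(W, k), compute_uv=False).sum()
             for k in range(K)) / K
    return 0.5 / M * (resid ** 2).sum() + lam * s1
```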
The first step in our analysis is to characterize the particularity of the residual tensor $\Delta := \widehat{\mathcal{W}} - \mathcal{W}^*$, as in the following lemma.

Lemma 2. Let $\widehat{\mathcal{W}}$ be the solution of the minimization problem (7) with $\lambda_M \geq 2|||\mathfrak{X}^*(\epsilon)|||_{\mathrm{mean}}/M$, and let $\Delta := \widehat{\mathcal{W}} - \mathcal{W}^*$, where $\mathcal{W}^*$ is the true low-rank tensor. Let $\Delta_{(k)} = \Delta'_k + \Delta''_k$ be the decomposition defined in Equation (4). Then we have the following inequalities:
1. $\mathrm{rank}(\Delta'_k) \leq 2r_k$ for each $k = 1, \ldots, K$.
2. $\sum_{k=1}^K \|\Delta''_k\|_{S_1} \leq 3\sum_{k=1}^K \|\Delta'_k\|_{S_1}$.

Proof. The proof uses the mode-k decomposability (5) and is analogous to that of Lemma 1 in [17].
The second ingredient of our analysis is the restricted strong convexity. Although "strong" may sound like a strong assumption, the point is that we require this assumption to hold only for the particular residual tensor we characterized in Lemma 2. The assumption can be stated as follows.

Assumption 1 (Restricted strong convexity). We suppose that there is a positive constant $\kappa(\mathfrak{X})$ such that the operator $\mathfrak{X}$ satisfies the inequality

$$\frac{1}{M}\|\mathfrak{X}(\Delta)\|_2^2 \;\geq\; \kappa(\mathfrak{X})\,|||\Delta|||_F^2, \qquad (8)$$

for all $\Delta \in \mathbb{R}^{n_1 \times \cdots \times n_K}$ such that for each $k = 1, \ldots, K$, $\mathrm{rank}(\Delta'_k) \leq 2r_k$ and $\sum_{k=1}^K \|\Delta''_k\|_{S_1} \leq 3\sum_{k=1}^K \|\Delta'_k\|_{S_1}$, where $\Delta'_k$ and $\Delta''_k$ are defined through the decomposition (4).
Now, using the above two ingredients, we are ready to prove the following deterministic guarantee on the performance of the estimation procedure (7).

Theorem 1. Let $\widehat{\mathcal{W}}$ be the solution of the minimization problem (7) with $\lambda_M \geq 2|||\mathfrak{X}^*(\epsilon)|||_{\mathrm{mean}}/M$. Suppose that the operator $\mathfrak{X}$ satisfies the restricted strong convexity condition. Then the following bound is true:

$$|||\widehat{\mathcal{W}} - \mathcal{W}^*|||_F \;\leq\; \frac{32\lambda_M \sum_{k=1}^K \sqrt{r_k}}{\kappa(\mathfrak{X})\,K}. \qquad (9)$$
Proof. Let $\Delta = \widehat{\mathcal{W}} - \mathcal{W}^*$. Combining the fact that the objective value for $\widehat{\mathcal{W}}$ is smaller than that for $\mathcal{W}^*$, the Hölder-like inequality (3), the triangular inequality $|||\widehat{\mathcal{W}}|||_{S_1} \geq |||\mathcal{W}^*|||_{S_1} - |||\Delta|||_{S_1}$, and the assumption $|||\mathfrak{X}^*(\epsilon)/M|||_{\mathrm{mean}} \leq \lambda_M/2$, we obtain

$$\frac{1}{2M}\|\mathfrak{X}(\Delta)\|_2^2 \;\leq\; |||\mathfrak{X}^*(\epsilon)/M|||_{\mathrm{mean}}\,|||\Delta|||_{S_1} + \lambda_M\,|||\Delta|||_{S_1} \;\leq\; 2\lambda_M\,|||\Delta|||_{S_1}. \qquad (10)$$

Now the left-hand side can be lower-bounded using the restricted strong convexity (8). On the other hand, using Lemma 2, the right-hand side can be upper-bounded as follows:

$$|||\Delta|||_{S_1} \;\leq\; \frac{1}{K}\sum_{k=1}^K\big(\|\Delta'_k\|_{S_1} + \|\Delta''_k\|_{S_1}\big) \;\leq\; \frac{4}{K}\sum_{k=1}^K \|\Delta'_k\|_{S_1} \;\leq\; \frac{4\,|||\Delta|||_F}{K}\sum_{k=1}^K \sqrt{2r_k}, \qquad (11)$$

where the last inequality follows because $|||\Delta|||_F = \|\Delta_{(k)}\|_F$ for $k = 1, \ldots, K$. Combining inequalities (8), (10), and (11), we obtain our claim (9).

Negahban et al. [15] (see also [17]) pointed out that the key properties for establishing a sharp convergence result for a regularized M-estimator are the decomposability of the regularizer and the restricted strong convexity. What we have shown suggests that the weaker mode-k decomposability (5) suffices to obtain the above convergence result for the overlapped Schatten 1-norm (1) regularization.
3.2 Noisy Tensor Decomposition
In this subsection, we consider the setting where all the elements are observed (with noise) and the goal is to recover the underlying low-rank tensor without noise.

Since all the elements are observed only once, $\mathfrak{X}$ is simply a vectorization ($M = N$), and the left-hand side of inequality (10) gives the quantity of interest, $\|\mathfrak{X}(\Delta)\|_2^2 = |||\widehat{\mathcal{W}} - \mathcal{W}^*|||_F^2$. Therefore, the remaining task is to bound $|||\mathfrak{X}^*(\epsilon)|||_{\mathrm{mean}}$, as in the following lemma.
Lemma 3. Suppose that $\mathfrak{X}: \mathbb{R}^{n_1 \times \cdots \times n_K} \to \mathbb{R}^N$ is a vectorization of a tensor. With high probability, the quantity $|||\mathfrak{X}^*(\epsilon)|||_{\mathrm{mean}}$ is concentrated around its mean, which can be bounded as follows:

$$\mathbb{E}\,|||\mathfrak{X}^*(\epsilon)|||_{\mathrm{mean}} \;\leq\; \frac{\sigma}{K}\sum_{k=1}^K\left(\sqrt{n_k} + \sqrt{\bar{n}_{\setminus k}}\right). \qquad (12)$$
Setting the regularization constant as $\lambda_M = c_0\,\mathbb{E}|||\mathfrak{X}^*(\epsilon)|||_{\mathrm{mean}}/N$, we obtain the following theorem.

Theorem 2. Suppose that $\mathfrak{X}: \mathbb{R}^{n_1 \times \cdots \times n_K} \to \mathbb{R}^N$ is a vectorization of a tensor. There are universal constants $c_0$ and $c_1$ such that, with high probability, any solution of the minimization problem (7) with regularization constant $\lambda_M = c_0\sigma\sum_{k=1}^K(\sqrt{n_k} + \sqrt{\bar{n}_{\setminus k}})/(KN)$ satisfies the following bound:

$$|||\widehat{\mathcal{W}} - \mathcal{W}^*|||_F^2 \;\leq\; c_1\sigma^2\left(\frac{1}{K}\sum_{k=1}^K\big(\sqrt{n_k} + \sqrt{\bar{n}_{\setminus k}}\big)\right)^2\left(\frac{1}{K}\sum_{k=1}^K\sqrt{r_k}\right)^2.$$

Proof. Combining Equations (10)-(11) with the fact that $\mathfrak{X}$ is simply a vectorization and $M = N$, we have

$$\frac{1}{N}\,\|\widehat{\mathcal{W}} - \mathcal{W}^*\|_F \;\leq\; 16\sqrt{2}\,\lambda_M\,\frac{1}{K}\sum_{k=1}^K\sqrt{r_k}.$$

Substituting the choice of regularization constant $\lambda_M$ and squaring both sides, we obtain our claim.
We can simplify the result of Theorem 2 by noting that n̄_{\k} = N/n_k ≥ n_k when the dimensions are of the same order. Introducing the notation ‖r‖_{1/2} = ((1/K) Σ_{k=1}^K √r_k)² and n^{−1} := (1/n_1, …, 1/n_K), we have

  (1/N) ‖Ŵ − W*‖_F² ≤ O_p(σ² ‖n^{−1}‖_{1/2} ‖r‖_{1/2}).   (13)

We call the quantity r̂ = ‖n^{−1}‖_{1/2} ‖r‖_{1/2} the normalized rank, because r̂ = r/n when the dimensions are balanced (n_k = n and r_k = r for all k = 1, …, K).
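As a quick sanity check on this definition, the following snippet (our own illustration, not from the paper) evaluates the normalized rank r̂ = ‖n^{−1}‖_{1/2} ‖r‖_{1/2}, where ‖x‖_{1/2} = ((1/K) Σ_k √x_k)², and confirms that it reduces to r/n in the balanced case.

    import numpy as np

    def normalized_rank(n, r):
        # r_hat = ||n^{-1}||_{1/2} * ||r||_{1/2} with ||x||_{1/2} = (mean(sqrt(x)))^2
        half = lambda x: np.mean(np.sqrt(np.asarray(x, float))) ** 2
        return half(1.0 / np.asarray(n, float)) * half(r)

    print(normalized_rank([50, 50, 20], [7, 8, 9]))     # unbalanced example
    print(normalized_rank([50, 50, 50], [10, 10, 10]))  # balanced: 10/50 = 0.2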
3.3 Random Gaussian Design

In this subsection, we consider the case where the elements of the input tensors X_i (i = 1, …, M) in the observation model (6) are distributed according to independent, identical standard Gaussian distributions. We call this setting the random Gaussian design.

First we show an upper bound on the norm ‖X*(ε)‖_mean, which we use to specify the scaling of the regularization constant λ_M in Theorem 1.
Lemma 4. Let X : R^{n_1×···×n_K} → R^M be a random Gaussian design. In addition, we assume that the noise ε_i is sampled independently from N(0, σ²). Then with high probability the quantity ‖X*(ε)‖_mean is concentrated around its mean, which can be bounded as follows:

  E‖X*(ε)‖_mean ≤ (σ √M / K) Σ_{k=1}^K (√n_k + √n̄_{\k}).
Next the following lemma, which is a generalization of a result presented in Negahban and Wainwright [17, Proposition 1], provides a ground for the restricted strong convexity assumption (8).

Lemma 5. Let X : R^{n_1×···×n_K} → R^M be a random Gaussian design. Then it satisfies

  ‖X(Δ)‖₂ / √M ≥ (1/4) ‖Δ‖_F − (1/K) Σ_{k=1}^K (√(n_k/M) + √(n̄_{\k}/M)) ‖Δ‖_{S1/1},

with probability at least 1 − 2 exp(−N/32).

Proof. The proof is analogous to that of Proposition 1 in [17] except that we use the Hölder-like inequality (3) for tensors instead of inequality (2) for matrices.
Finally, we obtain the following convergence bound.

Theorem 3. Under the random Gaussian design setup, there are universal constants c_0, c_1, and c_2 such that for a sample size M ≥ c_1 ((1/K) Σ_{k=1}^K (√n_k + √n̄_{\k}))² ((1/K) Σ_{k=1}^K √r_k)², any solution of the minimization problem (7) with regularization constant λ_M = c_0 σ Σ_{k=1}^K (√n_k + √n̄_{\k}) / (K √M) satisfies the following bound:

  ‖Ŵ − W*‖_F² ≤ c_2 σ² ((1/K) Σ_{k=1}^K (√n_k + √n̄_{\k}))² ((1/K) Σ_{k=1}^K √r_k)² / M,

with high probability.
Again we can simplify the result of Theorem 3 as follows: for sample size M ≥ c_1 N r̂ we have

  ‖Ŵ − W*‖_F² ≤ O_p(σ² (N/M) ‖n^{−1}‖_{1/2} ‖r‖_{1/2}),   (14)

where r̂ = ‖n^{−1}‖_{1/2} ‖r‖_{1/2} is the normalized rank. Note that the condition on the number of samples M does not depend on the noise variance σ². Therefore in the limit σ² → 0, the bound (14) is sufficiently small but only valid for sample size M that exceeds c_1 N r̂, which implies a threshold behavior as in Figure 1.

Note also that in the matrix case (K = 2), r_1 = r_2 = r and N ‖n^{−1}‖_{1/2} = O(n_1 + n_2). Therefore we can restate the above result as: for sample size M ≥ c_1 r(n_1 + n_2), we have ‖Ŵ − W*‖_F² ≤ O_p(r(n_1 + n_2)/M), which is compatible with the result in [17, 18].
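To make the random Gaussian design concrete, here is a small NumPy sketch (our own, not from the paper) of the observation model (6): each measurement is the inner product of the unknown tensor with an i.i.d. standard Gaussian tensor, plus N(0, σ²) noise. Empirically, ‖X(W)‖₂/√M concentrates around ‖W‖_F, in line with Lemma 5.

    import numpy as np

    def gaussian_design_observe(W, M, sigma, rng):
        # y_i = <X_i, W> + eps_i with X_i an i.i.d. standard Gaussian tensor
        X = rng.standard_normal((M,) + W.shape)
        y = X.reshape(M, -1) @ W.ravel() + sigma * rng.standard_normal(M)
        return X, y

    rng = np.random.default_rng(0)
    W = rng.standard_normal((10, 10, 5))
    X, y = gaussian_design_observe(W, M=200, sigma=0.1, rng=rng)
    print(np.linalg.norm(X.reshape(200, -1) @ W.ravel()) / np.sqrt(200),
          np.linalg.norm(W))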
4 Experiments

In this section, we conduct two numerical experiments to confirm our analysis in Section 3.2 and Section 3.3.
[Figure 2: two panels plotting mean squared error against normalized rank for tensors of size 50 × 50 × 20 and 100 × 100 × 50, each at several regularization levels λ_M ranging from 0.03/N to 12/N; panel (a) small noise (σ = 0.01), panel (b) large noise (σ = 0.1).]

Figure 2: Result of noisy tensor decomposition for tensors of size 50 × 50 × 20 and 100 × 100 × 50.
4.1 Noisy Tensor Decomposition

We randomly generated low-rank tensors of dimensions n^(1) = (50, 50, 20) and n^(2) = (100, 100, 50) for various ranks (r_1, …, r_K). For a specific rank, we generated the true tensor by drawing the elements of the r_1 × ··· × r_K "core tensor" from the standard normal distribution and multiplying each of its modes by an orthonormal factor randomly drawn from the Haar measure. As described in Section 3.2, the observation y consists of all the elements of the original tensor once (M = N) with additive independent Gaussian noise with variance σ². We used the alternating direction method of multipliers (ADMM) for the "constraint" approaches described in [23, 10] to solve the minimization problem (7). The whole experiment was repeated 10 times and averaged.
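The following NumPy sketch (our own reconstruction of this generation procedure; the function name is ours) draws such a random low-rank tensor: a Gaussian core of size r_1 × r_2 × r_3 is multiplied along each mode by an orthonormal factor, where the QR factor of a Gaussian matrix serves as a stand-in for a Haar-distributed factor.

    import numpy as np

    def random_low_rank_tensor(dims, ranks, rng):
        # Tucker-style construction: Gaussian core times orthonormal mode factors
        W = rng.standard_normal(ranks)
        for k, (n, r) in enumerate(zip(dims, ranks)):
            U, _ = np.linalg.qr(rng.standard_normal((n, r)))  # n x r, orthonormal columns
            W = np.moveaxis(np.tensordot(U, W, axes=(1, k)), 0, k)  # mode-k product
        return W

    rng = np.random.default_rng(0)
    W_true = random_low_rank_tensor((50, 50, 20), (7, 8, 9), rng)
    y = W_true + 0.01 * rng.standard_normal(W_true.shape)  # noisy full observation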
The results are shown in Figure 2. The mean squared error ‖Ŵ − W*‖_F²/N is plotted against the normalized rank r̂ = ‖n^{−1}‖_{1/2} ‖r‖_{1/2} (of the true tensor) defined in Equation (13). Since the choice of the regularization constant λ_M only depends on the size of the tensor and not on the ranks of the underlying tensor in Theorem 2, we fix the regularization constant to some different values and report the dependency of the estimation error on the normalized rank r̂ of the true tensor.

Figure 2(a) shows the result for small noise (σ = 0.01) and Figure 2(b) shows the result for large noise (σ = 0.1). As predicted by Theorem 2, the squared error ‖Ŵ − W*‖_F² grows linearly against the normalized rank r̂. This behaviour is consistently observed not only around the preferred regularization constant value (triangles) but also in the over-fitting case (circles) and the under-fitting case (crosses). Moreover, as predicted by Theorem 2, the preferred regularization constant value scales linearly and the squared error scales quadratically with the noise standard deviation σ. As predicted by Lemma 3, the curves for the smaller 50 × 50 × 20 tensor and those for the larger 100 × 100 × 50 tensor seem to agree when the regularization constant is scaled by the factor two. Note that the dominant term in inequality (12) is the second term √n̄_{\k}, which is roughly scaled by the factor two from 50 × 50 × 20 to 100 × 100 × 50.
4.2 Tensor completion from partial observations

In this subsection, we repeat the simulation originally done by Tomioka et al. [23] and demonstrate that our results in Section 3.3 can precisely predict the empirical scaling behaviour with respect to both the size and rank of a tensor.

We present results for both matrix completion (K = 2) and tensor completion (K = 3). For the matrix case, we randomly generated low-rank matrices of dimensions 50 × 20, 100 × 40, and 250 × 200. For the tensor case, we randomly generated low-rank tensors of dimensions 50 × 50 × 20 and 100 × 100 × 50. We generated the matrices or tensors as in the previous subsection for various ranks. We randomly selected some elements of the true matrix/tensor for training and kept the remaining elements for testing.
[Figure 3: fraction of trials achieving error ≤ 0.01 plotted against the normalized rank ‖n^{−1}‖_{1/2} ‖r‖_{1/2}; panel (a) matrix completion (K = 2) for sizes 50 × 20, 100 × 40, and 250 × 200; panel (b) tensor completion (K = 3) for sizes 50 × 50 × 20 and 100 × 100 × 50.]

Figure 3: Scaling behaviour of matrix/tensor completion with respect to the size n and the rank r.
No observation noise is added. We used the ADMM for the "as a matrix" and "constraint" approaches described in [23] to solve the minimization problem (7) for matrix completion and tensor completion, respectively. Since there is no observation noise, we chose the regularization constant λ → 0. A single experiment for a specific size and rank can be visualized as in Figure 1.

In Figure 3, we plot the minimum fraction of observations m = M/N that achieved error ‖Ŵ − W*‖_F smaller than 0.01 against the normalized rank r̂ = ‖n^{−1}‖_{1/2} ‖r‖_{1/2} (of the true tensor) defined in Equation (13). The matrix case is plotted in Figure 3(a) and the tensor case is plotted in Figure 3(b). Each series (blue crosses or red circles) corresponds to a different matrix/tensor size and each data point corresponds to a different core size (rank). We can see that the fraction of observations m = M/N scales linearly against the normalized rank r̂, which agrees with the condition M/N ≥ c_1 ‖n^{−1}‖_{1/2} ‖r‖_{1/2} = c_1 r̂ in Theorem 3 (see Equation (14)). The agreement is especially good for tensor completion (Figure 3(b)), where the two series almost overlap. Interestingly, we can see that when compared at the same normalized rank, tensor completion is easier than matrix completion. For example, when n_k = 50 and r_k = 10 for each k = 1, …, K, the normalized rank is 0.2. From Figure 3, we can see that we only need to observe 30% of the entries in the tensor case to achieve error smaller than 0.01, whereas we need about 60% of the entries in the matrix case.
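As an illustration of this completion setup (our own sketch, not the authors' code), the random train/test split over entries can be expressed as a Boolean mask; a solver would then fit W subject to agreement with the observed entries plus the overlapped Schatten 1-norm penalty, and be evaluated on the held-out entries.

    import numpy as np

    def split_observations(shape, frac, rng):
        # True marks an observed (training) entry; False is held out for testing
        return rng.random(shape) < frac

    rng = np.random.default_rng(0)
    mask = split_observations((50, 50, 20), frac=0.3, rng=rng)
    print(mask.mean())  # approximately the observation fraction m = M/N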
5 Conclusion

We have analyzed the statistical performance of a tensor decomposition algorithm based on the overlapped Schatten 1-norm regularization (7). Numerical experiments show that our theory can predict the empirical scaling behaviour well. The fraction of observations m = M/N at the threshold predicted by our theory is proportional to the quantity we call the normalized rank, which refines the conjecture (sum of the mode-k ranks) in [23].

There are numerous directions in which the current study can be extended. In this paper, we have focused on the convergence of the estimation error; it would be meaningful to also analyze the condition for the consistency of the estimated rank as in [2]. Second, although we have succeeded in predicting the empirical scaling behaviour, the setting of random Gaussian design does not match the tensor completion setting in Section 4.2. In order to analyze the latter setting, the notion of incoherence in [5] or spikiness in [16] might be useful. This might also explain why tensor completion is easier than matrix completion at the same normalized rank. Moreover, when the target tensor is only low-rank in a certain mode, Schatten 1-norm regularization fails badly (as predicted by the high normalized rank). It would be desirable to analyze the "Mixture" approach that aims at this case [23]. In a broader context, we believe that the current paper could serve as a basis for re-examining the concept of tensor rank and low-rank approximation of tensors based on convex optimization.

Acknowledgments. We would like to thank Franz Király and Hiroshi Kajino for their valuable comments and discussions. This work was supported in part by MEXT KAKENHI 22700138, 23240019, 23120004, 22700289, and NTT Communication Science Laboratories.
References
[1] E. Acar and B. Yener. Unsupervised multiway data analysis: A literature survey. IEEE T. Knowl. Data. En., 21(1):6-20, 2009.
[2] F. R. Bach. Consistency of trace norm minimization. J. Mach. Learn. Res., 9:1019-1048, 2008.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] R. Bro. PARAFAC. Tutorial and applications. Chemometr. Intell. Lab., 38(2):149-171, 1997.
[5] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 9(6):717-772, 2009.
[6] J. D. Carroll and J. J. Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of "Eckart-Young" decomposition. Psychometrika, 35(3):283-319, 1970.
[7] P. Comon. Tensor decompositions. In J. G. McWhirter and I. K. Proudler, editors, Mathematics in Signal Processing V. Oxford University Press, 2002.
[8] L. De Lathauwer and J. Vandewalle. Dimensionality reduction in higher-order signal processing and rank-(r1, r2, ..., rn) reduction in multilinear algebra. Linear Algebra Appl., 391:31-55, 2004.
[9] K. Fukumizu. Generalization error of linear neural networks in unidentifiable cases. In Algorithmic Learning Theory, pages 51-62. Springer, 1999.
[10] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27:025010, 2011.
[11] J. Håstad. Tensor rank is NP-complete. Journal of Algorithms, 11(4):644-654, 1990.
[12] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
[13] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. In Proc. ICCV, 2009.
[14] M. Mørup. Applications of tensor (multiway array) factorizations and decompositions in data mining. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):24-40, 2011.
[15] S. Negahban, P. Ravikumar, M. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in NIPS 22, pages 1348-1356. 2009.
[16] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Technical report, arXiv:1009.2118, 2010.
[17] S. Negahban and M. J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Ann. Statist., 39(2), 2011.
[18] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471-501, 2010.
[19] A. Rohde and A. B. Tsybakov. Estimation of high-dimensional low-rank matrices. Ann. Statist., 39(2):887-930, 2011.
[20] N. D. Sidiropoulos, R. Bro, and G. B. Giannakis. Parallel factor analysis in sensor array processing. IEEE T. Signal Proces., 48(8):2377-2388, 2000.
[21] M. Signoretto, L. De Lathauwer, and J. A. K. Suykens. Nuclear norms for tensors and their use for convex multilinear estimation. Technical Report 10-186, ESAT-SISTA, K.U. Leuven, 2010.
[22] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in NIPS 17, pages 1329-1336. MIT Press, Cambridge, MA, 2005.
[23] R. Tomioka, K. Hayashi, and H. Kashima. Estimation of low-rank tensors via convex optimization. Technical report, arXiv:1010.0789, 2011.
[24] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279-311, 1966.
[25] M. Vasilescu and D. Terzopoulos. Multilinear analysis of image ensembles: Tensorfaces. In Computer Vision - ECCV 2002, pages 447-460, 2002.
[26] H. Wang and N. Ahuja. Facial expression decomposition. In Proc. 9th ICCV, pages 958-965, 2003.
3,815 | 4,454 | High-dimensional regression with noisy and missing data:
Provable guarantees with non-convexity
Martin J. Wainwright
Departments of Statistics and EECS
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Po-Ling Loh
Department of Statistics
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Abstract
Although the standard formulations of prediction problems involve fully-observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and/or missing data,
possibly involving dependencies. We study these issues in the context of high-dimensional
sparse linear regression, and propose novel estimators for the cases of noisy, missing, and/or
dependent data. Many standard approaches to noisy or missing data, such as those using the
EM algorithm, lead to optimization problems that are inherently non-convex, and it is difficult
to establish theoretical guarantees on practical algorithms. While our approach also involves
optimizing non-convex programs, we are able to both analyze the statistical error associated
with any global optimum, and prove that a simple projected gradient descent algorithm will
converge in polynomial time to a small neighborhood of the set of global minimizers. On
the statistical side, we provide non-asymptotic bounds that hold with high probability for the
cases of noisy, missing, and/or dependent data. On the computational side, we prove that
under the same types of conditions required for statistical consistency, the projected gradient
descent algorithm will converge at geometric rates to a near-global minimizer. We illustrate
these theoretical predictions with simulations, showing agreement with the predicted scalings.
1 Introduction
In standard formulations of prediction problems, it is assumed that the covariates are fully-observed and sampled independently from some underlying distribution. However, these assumptions are not realistic for many
applications, in which covariates may be observed only partially, observed with corruption, or exhibit dependencies. Consider the problem of modeling the voting behavior of politicians: in this setting, votes may be missing
due to abstentions, and temporally dependent due to collusion or "tit-for-tat" behavior. Similarly, surveys often
suffer from the missing data problem, since users fail to respond to all questions. Sensor network data also tends
to be both noisy due to measurement error, and partially missing due to failures or drop-outs of sensors.
There are a variety of methods for dealing with noisy and/or missing data, including various heuristic methods, as well as likelihood-based methods involving the expectation-maximization (EM) algorithm (e.g., see the
book [1] and references therein). A challenge in this context is the possible non-convexity of associated optimization problems. For instance, in applications of EM, problems in which the negative likelihood is a convex
function often become non-convex with missing or noisy data. Consequently, although the EM algorithm will
converge to a local minimum, it is difficult to guarantee that the local optimum is close to a global minimum.
In this paper, we study these issues in the context of high-dimensional sparse linear regression, in the case
when the predictors or covariates are noisy, missing, and/or dependent. Our main contribution is to develop and
study some simple methods for handling these issues, and to prove theoretical results about both the associated
statistical error and the optimization error. Like EM-based approaches, our estimators are based on solving
optimization problems that may be non-convex; however, despite this non-convexity, we are still able to prove
that a simple form of projected gradient descent will produce an output that is "sufficiently close" (meaning as small as the statistical error) to any global optimum. As a second result, we bound the size of this statistical
error, showing that it has the same scaling as the minimax rates for the classical cases of perfectly observed and
independently sampled covariates. In this way, we obtain estimators for noisy, missing, and/or dependent data
with guarantees similar to the usual fully-observed and independent case. The resulting estimators allow us to
solve the problem of high-dimensional Gaussian graphical model selection with missing data.
There is a large body of work on the problem of corrupted covariates or errors-in-variables for regression
problems (see the papers and books [2, 3, 4, 5] and references therein). Much of the earlier theoretical work
is classical in nature, where the sample size n diverges with the dimension p held fixed. Most relevant to this
paper is more recent work that has examined issues of corrupted and/or missing data in the context of high-dimensional sparse linear models, allowing for n ≪ p. Städler and Bühlmann [6] developed an EM-based
method for sparse inverse covariance matrix estimation in the missing data regime, and used this result to
derive an algorithm for sparse linear regression with missing data. As mentioned above, however, it is difficult
to guarantee that EM will converge to a point close to a global optimum of the likelihood, in contrast to the
methods studied here. Rosenbaum and Tsybakov [7] studied the sparse linear model when the covariates are
corrupted by noise, and proposed a modified form of the Dantzig selector, involving a convex program. This
convexity produces a computationally attractive method, but the statistical error bounds that they establish scale
proportionally with the size of the additive perturbation, hence are often weaker than the bounds that can be
proved using our methods.
The remainder of this paper is organized as follows. We begin in Section 2 with background and a precise
description of the problem. We then introduce the class of estimators we will consider and the form of the
projected gradient descent algorithm. Section 3 is devoted to a description of our main results, including a pair
of general theorems on the statistical and optimization error, and then a series of corollaries applying our results
to the cases of noisy, missing, and dependent data. In Section 4, we demonstrate simulations to confirm that our
methods work in practice. For detailed proofs, we refer the reader to the technical report [8].
Notation. For a matrix M, we write ‖M‖_max := max_{i,j} |m_ij| for the elementwise ℓ∞-norm of M. Furthermore, |||M|||₁ denotes the induced ℓ1-operator norm (maximum absolute column sum) of M, and |||M|||_op is the induced ℓ2-operator norm (spectral norm) of M. We write κ(M) := λ_max(M)/λ_min(M) for the condition number of M.
2 Background and problem set-up
In this section, we provide a formal description of the problem and motivate the class of estimators studied in
the paper. We then describe a class of projected gradient descent algorithms to be used in the sequel.
2.1 Observation model and high-dimensional framework
Suppose we observe a response variable y_i ∈ R that is linked to a covariate vector x_i ∈ R^p via the linear model

  y_i = ⟨x_i, β*⟩ + ε_i, for i = 1, 2, …, n.   (1)

Here, the regression vector β* ∈ R^p is unknown, and ε_i ∈ R is observation noise, independent of x_i. Rather than directly observing each x_i ∈ R^p, we observe a vector z_i ∈ R^p linked to x_i via some conditional distribution:

  z_i ∼ Q(· | x_i), for i = 1, 2, …, n.   (2)
This setup allows us to model various types of disturbances to the covariates, including

(a) Additive noise: We observe z_i = x_i + w_i, where w_i ∈ R^p is a random vector independent of x_i, say zero-mean with known covariance matrix Σ_w.

(b) Missing data: For a fraction ρ ∈ [0, 1), we observe a random vector z_i ∈ R^p such that independently for each component j, we observe z_ij = x_ij with probability 1 − ρ, and z_ij = ∗ with probability ρ. This model can also be generalized to allow for different missing probabilities for each covariate.
Our first set of results is deterministic, depending on specific instantiations of the observed variables {(y_i, z_i)}_{i=1}^n. However, we are also interested in proving results that hold with high probability when the x_i's and z_i's are drawn at random from some distribution. We develop results for both the i.i.d. setting and the case of dependent covariates, where the x_i's are generated according to a stationary vector autoregressive (VAR) process. Furthermore, we work within a high-dimensional framework where n ≪ p, and assume β* has at most k non-zero parameters, where the sparsity k is also allowed to increase to infinity with the sample size n. We assume the scaling ‖β*‖₂ = O(1), which is reasonable in order to have a non-diverging signal-to-noise ratio.
2.2 M-estimators for noisy and missing covariates
We begin by examining a simple deterministic problem. Let Cov(X) = Σ_x ≻ 0, and consider the program

  β̂ ∈ arg min_{‖β‖₁ ≤ R} { (1/2) βᵀ Σ_x β − ⟨Σ_x β*, β⟩ }.   (3)

As long as the constraint radius R is at least ‖β*‖₁, the unique solution to this convex program is β̂ = β*. This idealization suggests various estimators based on the plug-in principle. We form unbiased estimates of Σ_x and Σ_x β*, denoted by Γ̂ and γ̂, respectively, and consider the modified program and its regularized version:

  β̂ ∈ arg min_{‖β‖₁ ≤ R} { (1/2) βᵀ Γ̂ β − ⟨γ̂, β⟩ },   (4)

  β̂ ∈ arg min_{β ∈ R^p} { (1/2) βᵀ Γ̂ β − ⟨γ̂, β⟩ + λ_n ‖β‖₁ },   (5)

where λ_n > 0 is the regularization parameter. The Lasso [9, 10] is a special case of these programs, where

  Γ̂_Las := (1/n) XᵀX and γ̂_Las := (1/n) Xᵀy,   (6)

and we have introduced the shorthand y = (y_1, …, y_n)ᵀ ∈ R^n, and X ∈ R^{n×p}, with x_iᵀ as its i-th row. In this paper, we focus on more general instantiations of the programs (4) and (5), involving different choices of the pair (Γ̂, γ̂) that are adapted to the cases of noisy and/or missing data. Note that the matrix Γ̂_Las is positive semidefinite, so the Lasso program is convex. In sharp contrast, for the cases of noisy or missing data, the most natural choice of the matrix Γ̂ is not positive semidefinite, hence the loss functions appearing in the problems (4) and (5) are non-convex. It is generally impossible to provide a polynomial-time algorithm that converges to a (near) global optimum of a non-convex problem. Remarkably, we prove that a simple projected gradient descent algorithm still converges with high probability to a vector close to any global optimum in our setting.

Let us illustrate these ideas with some examples:
Example 1 (Additive noise). Suppose we observe the n × p matrix Z = X + W, where W is a random matrix independent of X, with rows w_i drawn i.i.d. from a zero-mean distribution with known covariance Σ_w. Consider the pair

  Γ̂_add := (1/n) ZᵀZ − Σ_w and γ̂_add := (1/n) Zᵀy,   (7)

which correspond to unbiased estimators of Σ_x and Σ_x β*, respectively. Note that when Σ_w = 0 (corresponding to the noiseless case), the estimators reduce to the standard Lasso. However, when Σ_w ≠ 0, the matrix Γ̂_add is not positive semidefinite in the high-dimensional regime (n ≪ p) of interest. Indeed, since the matrix (1/n) ZᵀZ has rank at most n, the subtracted matrix Σ_w may cause Γ̂_add to have a large number of negative eigenvalues.
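A minimal NumPy sketch of this construction (our own illustration; the variable names are ours) makes the loss of positive semidefiniteness easy to see when n < p:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, sigma_w = 50, 200, 0.2
    X = rng.standard_normal((n, p))
    Z = X + sigma_w * rng.standard_normal((n, p))  # observed corrupted covariates

    Sigma_w = sigma_w ** 2 * np.eye(p)
    Gamma_add = Z.T @ Z / n - Sigma_w              # unbiased surrogate for Sigma_x
    print((np.linalg.eigvalsh(Gamma_add) < 0).sum())  # count of negative eigenvalues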
Example 2 (Missing data). Suppose each entry of X is missing independently with probability ρ ∈ [0, 1), and we observe the matrix Z ∈ R^{n×p} with entries

  Z_ij = X_ij with probability 1 − ρ, and Z_ij = 0 otherwise.

Given the observed matrix Z ∈ R^{n×p}, consider an estimator of the general form (4), based on the choices

  Γ̂_mis := (1/n) Z̃ᵀZ̃ − ρ diag((1/n) Z̃ᵀZ̃) and γ̂_mis := (1/n) Z̃ᵀy,   (8)

where Z̃_ij = Z_ij/(1 − ρ). It is easy to see that the pair (Γ̂_mis, γ̂_mis) reduces to the pair (Γ̂_Las, γ̂_Las) for the standard Lasso when ρ = 0, corresponding to no missing data. In the more interesting case when ρ ∈ (0, 1), the matrix (1/n) Z̃ᵀZ̃ in equation (8) has rank at most n, so the subtracted diagonal matrix may cause the matrix Γ̂_mis to have a large number of negative eigenvalues when n ≪ p, and the associated quadratic function is not convex.
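Similarly, a brief sketch (ours, not the authors' code) of the missing-data surrogates (8): rescale observed entries by 1/(1 − ρ) and apply the diagonal correction.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, rho = 50, 200, 0.2
    X = rng.standard_normal((n, p))
    y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(n)  # toy responses
    Z = np.where(rng.random((n, p)) < rho, 0.0, X)           # zeros mark missing entries

    Z_tilde = Z / (1 - rho)
    M = Z_tilde.T @ Z_tilde / n
    Gamma_mis = M - rho * np.diag(np.diag(M))                # diagonal correction, cf. (8)
    gamma_mis = Z_tilde.T @ y / n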
2.3 Restricted eigenvalue conditions

Given an estimate β̂, there are various ways to assess its closeness to β*. We focus on the ℓ2-norm ‖β̂ − β*‖₂, as well as the closely related ℓ1-norm ‖β̂ − β*‖₁. When the covariate matrix X is fully observed (so that the Lasso can be applied), it is well understood that a sufficient condition for ℓ2-recovery is that the matrix Γ̂_Las = (1/n) XᵀX satisfy a restricted eigenvalue (RE) condition (e.g., [11, 12, 13]). In this paper, we use the following condition:
Definition 1 (Lower-RE condition). The matrix Γ̂ satisfies a lower restricted eigenvalue condition with curvature α_ℓ > 0 and tolerance τ_ℓ(n, p) > 0 if

  θᵀ Γ̂ θ ≥ α_ℓ ‖θ‖₂² − τ_ℓ(n, p) ‖θ‖₁² for all θ ∈ R^p.   (9)

It can be shown that when the Lasso matrix Γ̂_Las = (1/n) XᵀX satisfies this RE condition (9), the Lasso estimate has low ℓ2-error for any vector β* supported on any subset of size at most k ≲ 1/τ_ℓ(n, p). Moreover, it is known that for various random choices of the design matrix X, the Lasso matrix Γ̂_Las will satisfy such an RE condition with high probability (e.g., [14]). We also make use of the analogous upper restricted eigenvalue condition:

Definition 2 (Upper-RE condition). The matrix Γ̂ satisfies an upper restricted eigenvalue condition with smoothness α_u > 0 and tolerance τ_u(n, p) > 0 if

  θᵀ Γ̂ θ ≤ α_u ‖θ‖₂² + τ_u(n, p) ‖θ‖₁² for all θ ∈ R^p.   (10)

In recent work on high-dimensional projected gradient descent, Agarwal et al. [15] use a more general form of the bounds (9) and (10), called the restricted strong convexity (RSC) and restricted smoothness (RSM) conditions.
2.4 Projected gradient descent

In addition to proving results about the global minima of programs (4) and (5), we are also interested in polynomial-time procedures for approximating such optima. We show that the simple projected gradient descent algorithm can be used to solve the program (4). The algorithm generates a sequence of iterates β^t according to

  β^{t+1} = Π( β^t − (1/η)(Γ̂ β^t − γ̂) ),   (11)

where η > 0 is a stepsize parameter, and Π denotes the ℓ2-projection onto the ℓ1-ball of radius R. This projection can be computed rapidly in O(p) time, for instance using a procedure due to Duchi et al. [16]. Our analysis shows that under a reasonable set of conditions, the iterates for the family of programs (4) converge to a point extremely close to any global optimum in both ℓ1-norm and ℓ2-norm, even for the non-convex program.
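The update (11) is straightforward to implement; below is a short sketch (ours) using a simple O(p log p) sort-based ℓ1-ball projection in place of the O(p) routine of Duchi et al. [16].

    import numpy as np

    def project_l1(v, R):
        # Euclidean projection of v onto the l1-ball of radius R (sort-based)
        if np.abs(v).sum() <= R:
            return v
        u = np.sort(np.abs(v))[::-1]
        css = np.cumsum(u)
        k = np.nonzero(u * np.arange(1, len(v) + 1) > css - R)[0][-1]
        tau = (css[k] - R) / (k + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def projected_gradient(Gamma, gamma, R, eta, T=500):
        # Iterates beta^{t+1} = Pi(beta^t - (Gamma beta^t - gamma)/eta), cf. (11)
        beta = np.zeros_like(gamma)
        for _ in range(T):
            beta = project_l1(beta - (Gamma @ beta - gamma) / eta, R)
        return beta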
3 Main results and consequences

We provide theoretical guarantees for both the constrained estimator (4) and the regularized variant

  β̂ ∈ arg min_{‖β‖₁ ≤ b_0 √k} { (1/2) βᵀ Γ̂ β − ⟨γ̂, β⟩ + λ_n ‖β‖₁ },   (12)

for a constant b_0 ≥ ‖β*‖₂, which is a hybrid between the constrained (4) and regularized (5) programs.
3.1 Statistical error

In controlling the statistical error, we assume that the matrix Γ̂ satisfies a lower-RE condition with curvature α_ℓ and tolerance τ_ℓ(n, p), as previously defined (9). In addition, recall that the matrix Γ̂ and vector γ̂ serve as surrogates to the deterministic quantities Σ_x ∈ R^{p×p} and Σ_x β* ∈ R^p, respectively. We assume there is a function φ(Q, σ_ε), depending on the standard deviation σ_ε of the observation noise vector ε from equation (1) and the conditional distribution Q from equation (2), such that the following deviation conditions are satisfied:

  ‖γ̂ − Σ_x β*‖_∞ ≤ φ(Q, σ_ε) √(log p / n) and ‖(Γ̂ − Σ_x) β*‖_∞ ≤ φ(Q, σ_ε) √(log p / n).   (13)

The following result applies to any global optimum β̂ of the program (12) with λ_n ≥ 4 φ(Q, σ_ε) √(log p / n):
Theorem 1 (Statistical error). Suppose the surrogates (Γ̂, γ̂) satisfy the deviation bounds (13), and the matrix Γ̂ satisfies the lower-RE condition (9) with parameters (α_ℓ, τ_ℓ) such that

  √k τ_ℓ(n, p) ≤ min{ α_ℓ / (128 √k), (φ(Q, σ_ε) / (2 b_0)) √(log p / n) }.   (14)

Then for any vector β* with sparsity at most k, there is a universal positive constant c_0 such that any global optimum β̂ satisfies the bounds

  ‖β̂ − β*‖₂ ≤ (c_0 √k / α_ℓ) max{ φ(Q, σ_ε) √(log p / n), λ_n },   (15a)

  ‖β̂ − β*‖₁ ≤ (8 c_0 k / α_ℓ) max{ φ(Q, σ_ε) √(log p / n), λ_n }.   (15b)

The same bounds (without λ_n) also apply to the constrained program (4) with radius choice R = ‖β*‖₁.
Remarks: Note that for the standard Lasso pair (Γ̂_Las, γ̂_Las), bounds of the form (15) for sub-Gaussian noise are well-known from past work (e.g., [12, 17, 18, 19]). The novelty of Theorem 1 is in allowing for general pairs of such surrogates, which can lead to non-convexity in the underlying M-estimator.
3.2 Optimization error

Although Theorem 1 provides guarantees that hold uniformly for any choice of global minimizer, it does not provide any guidance on how to approximate such a global minimizer using a polynomial-time algorithm. Nonetheless, we are able to show that for the family of programs (4), under reasonable conditions on Γ̂ satisfied in various settings, a simple projected gradient method will converge geometrically fast to a very good approximation of any global optimum.

Theorem 2 (Optimization error). Consider the program (4) with any choice of radius R for which the constraint is active. Suppose that the surrogate matrix Γ̂ satisfies the lower-RE (9) and upper-RE (10) conditions with τ_u, τ_ℓ ≲ (log p)/n, and that we apply projected gradient descent (11) with constant stepsize η = 2α_u. Then as long as n ≳ k log p, there is a contraction coefficient γ ∈ (0, 1) independent of (n, p, k) and universal positive constants (c_1, c_2) such that for any global optimum β̂, the gradient descent iterates {β^t}_{t=0}^∞ satisfy the bound

  ‖β^t − β̂‖₂² ≤ γ^t ‖β^0 − β̂‖₂² + c_1 (log p / n) ‖β̂ − β*‖₁² + c_2 ‖β̂ − β*‖₂² for all t = 0, 1, 2, ….   (16)

In addition, we have the ℓ1-bound

  ‖β^t − β̂‖₁ ≤ 2√k ‖β^t − β̂‖₂ + 2√k ‖β̂ − β*‖₂ + 2 ‖β̂ − β*‖₁ for all t = 0, 1, 2, ….   (17)

Note that the bound (16) controls the ℓ2-distance between the iterate β^t at time t, which is easily computed in polynomial time, and any global optimum β̂ of the program (4), which may be difficult to compute. Since γ ∈ (0, 1), the first term in the bound vanishes as t increases. Together with Theorem 1, equations (16) and (17) imply that the ℓ2- and ℓ1-optimization errors are bounded as O(√(k log p / n)) and O(k √(log p / n)), respectively.
3.3 Some consequences

Both Theorems 1 and 2 are deterministic results; applying them to specific models requires additional work to establish the stated conditions. We turn to the statements of some consequences of these theorems for different cases of noisy, missing, and dependent data. A zero-mean random variable Z is sub-Gaussian with parameter σ > 0 if E(e^{λZ}) ≤ exp(λ²σ²/2) for all λ ∈ R. We say that a random matrix X ∈ R^{n×p} is sub-Gaussian with parameters (Σ, σ²) if each row x_iᵀ ∈ R^p is sampled independently from a zero-mean distribution with covariance Σ, and for any unit vector u ∈ R^p, the random variable uᵀx_i is sub-Gaussian with parameter at most σ.

We begin with the case of i.i.d. samples with additive noise, as described in Example 1.

Corollary 1. Suppose we observe Z = X + W, where the random matrices X, W ∈ R^{n×p} are sub-Gaussian with parameters (Σ_x, σ_x²) and (Σ_w, σ_w²), respectively, and the sample size is lower-bounded as n ≳ max{ ((σ_x² + σ_w²)/λ_min(Σ_x))², 1 } k log p. Then for the M-estimator based on the surrogates (Γ̂_add, γ̂_add), the results of Theorems 1 and 2 hold with parameters

  α_ℓ = (1/2) λ_min(Σ_x) and φ(Q, σ_ε) = c_0 ( σ_x² + σ_w² + σ_ε √(σ_x² + σ_w²) ),

with probability at least 1 − c_1 exp(−c_2 log p).
For i.i.d. samples with missing data, we have the following:

Corollary 2. Suppose X ∈ R^{n×p} is a sub-Gaussian matrix with parameters (Σ_x, σ_x²), and Z is the missing-data matrix with parameter ρ. If n ≳ max{ σ_x⁴ / ((1 − ρ)⁴ λ_min²(Σ_x)), 1 } k log p, then Theorems 1 and 2 hold with probability at least 1 − c_1 exp(−c_2 log p) for α_ℓ = (1/2) λ_min(Σ_x) and

  φ(Q, σ_ε) = c_0 (σ_x / (1 − ρ)) ( σ_ε + σ_x / (1 − ρ) ).
Now consider the case where the rows of X are drawn from a vector autoregressive (VAR) process according to

  x_{i+1} = A x_i + v_i, for i = 1, 2, …, n − 1,   (18)

where v_i ∈ R^p is a zero-mean noise vector with covariance matrix Σ_v, and A ∈ R^{p×p} is a driving matrix with spectral norm |||A|||_op < 1. We assume the rows of X are drawn from a Gaussian distribution with covariance Σ_x, such that Σ_x = A Σ_x Aᵀ + Σ_v, so the process is stationary. Corollary 3 corresponds to the case of additive noise for a Gaussian VAR process. A similar result can be derived in the missing-data setting.

Corollary 3. Suppose the rows of X are drawn according to a Gaussian VAR process with driving matrix A. Suppose the additive noise matrix W is i.i.d. with Gaussian rows. If n ≳ max{ ζ⁴ / λ_min²(Σ_x), 1 } k log p, with

  ζ² = |||Σ_w|||_op + 2 |||Σ_x|||_op / (1 − |||A|||_op),

then Theorems 1 and 2 hold with probability at least 1 − c_1 exp(−c_2 log p) for α_ℓ = (1/2) λ_min(Σ_x) and φ(Q, σ_ε) = c_0 (σ_ε ζ + ζ²).
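For reference, here is a small sketch (ours) of drawing rows from the Gaussian VAR process (18), with a burn-in period so the chain is approximately stationary; the parameters below are chosen so that Σ_x = I.

    import numpy as np

    def var_rows(A, Sigma_v, n, rng, burn_in=200):
        # x_{i+1} = A x_i + v_i with v_i ~ N(0, Sigma_v); returns n rows
        p = A.shape[0]
        L = np.linalg.cholesky(Sigma_v)
        x, rows = np.zeros(p), []
        for i in range(burn_in + n):
            x = A @ x + L @ rng.standard_normal(p)
            if i >= burn_in:
                rows.append(x.copy())
        return np.array(rows)

    rng = np.random.default_rng(0)
    p = 100
    A = 0.2 * np.eye(p)                                    # driving matrix, ||A||_op = 0.2
    X = var_rows(A, (1 - 0.2 ** 2) * np.eye(p), 500, rng)  # stationary with Sigma_x = I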
3.4 Application to graphical model inverse covariance estimation

The problem of inverse covariance estimation for a Gaussian graphical model is closely related to the Lasso. Meinshausen and Bühlmann [20] prescribed a way to recover the support of the precision matrix Θ when each column of Θ is k-sparse, via linear regression and the Lasso. More recently, Yuan [21] proposed a method for estimating Θ using linear regression and the Dantzig selector, and obtained error bounds on |||Θ̂ − Θ|||₁ when the columns of Θ are bounded in ℓ1. Both of these results assume the rows of X are observed noiselessly and independently.
Suppose we are given a matrix X ∈ R^{n×p} of samples from a multivariate Gaussian distribution, where each row is distributed according to N(0, Σ). We assume the rows of X are either i.i.d. or sampled from a Gaussian VAR process (18). Based on the modified Lasso, we devise a method to estimate Θ based on a corrupted observation matrix Z. Let X^j denote the j-th column of X, and let X^{−j} denote the matrix X with the j-th column removed. By standard results on Gaussian graphical models, there exists a vector θ^j ∈ R^{p−1} such that

  X^j = X^{−j} θ^j + ε^j,   (19)

where ε^j is a vector of i.i.d. Gaussians and ε^j ⊥⊥ X^{−j}. Defining a_j := −(Σ_jj − Σ_{j,−j} θ^j)^{−1}, we have Θ_{j,−j} = a_j θ^j. Our algorithm estimates θ̂^j and â_j for each j and combines the estimates to obtain Θ̂_{j,−j} = â_j θ̂^j.

In the additive noise case, we observe Z = X + W. The equations (19) yield Z^j = X^{−j} θ^j + (ε^j + W^j). Note that δ^j = ε^j + W^j is a vector of i.i.d. Gaussians, and since X ⊥⊥ W, we have δ^j ⊥⊥ X^{−j}. Hence, our results on covariates with additive noise produce an estimate of θ^j by solving the program (4) or (12) with the pair (Γ̂^{(j)}, γ̂^{(j)}) = (Γ̂_{−j,−j}, (1/n) Z^{−j,ᵀ} Z^j), where Γ̂ = (1/n) ZᵀZ − Σ_w. When Z is a missing-data version of X, we similarly estimate the vectors θ^j with suitable corrections. We arrive at the following algorithm:
Algorithm 3.1.

(1) Perform p linear regressions of the variables Z^j upon the remaining variables Z^{−j}, using the modified Lasso program (4) or (12) with the estimators (Γ̂^{(j)}, γ̂^{(j)}), to obtain estimates θ̂^j.

(2) Estimate the scalars a_j using â_j := −(Σ̂_jj − Σ̂_{j,−j} θ̂^j)^{−1}. Set Θ̃_{j,−j} = â_j θ̂^j and Θ̃_jj = −â_j.

(3) Construct the matrix Θ̂ = arg min_{Θ ∈ S^p} |||Θ − Θ̃|||₁, where S^p is the set of symmetric p × p matrices.

Note that the minimization in step (3) is a linear program, so it is easily solved with standard methods. We have:
Corollary 4. Suppose the columns of the matrix Θ are k-sparse, and suppose the condition number κ(Σ) is nonzero and finite. Suppose the deviation conditions

  ‖γ̂^{(j)} − Σ_{−j,−j} θ^j‖_∞ ≤ φ(Q, σ_ε) √(log p / n) and ‖(Γ̂^{(j)} − Σ_{−j,−j}) θ^j‖_∞ ≤ φ(Q, σ_ε) √(log p / n)   (20)

hold for all j, and suppose we have the following additional deviation condition on Σ̂:

  ‖Σ̂ − Σ‖_max ≤ c φ(Q, σ_ε) √(log p / n).   (21)

Finally, suppose the lower-RE condition holds uniformly over the matrices Γ̂^{(j)} with the scaling (14). Then under the estimation procedure of Algorithm 3.1, there exists a universal constant c_0 such that

  |||Θ̂ − Θ|||_op ≤ (c_0 κ²(Σ) φ(Q, σ_ε) / λ_min(Σ)) ( φ(Q, σ_ε) / λ_min(Σ) + 1/α_ℓ ) k √(log p / n).
4 Simulations

In this section, we provide simulation results to confirm that the scalings predicted by our theory are sharp. In Figure 1, we plot the results of simulations under the additive noise model described in Example 1, using Σ_x = I and Σ_w = σ_w² I with σ_w = 0.2. Panel (a) provides plots of ℓ2-error versus the sample size n, for p ∈ {128, 256, 512}. For all three choices of dimensions, the error decreases to zero as the sample size n increases, showing consistency of the method. If we plot the ℓ2-error versus the rescaled sample size n/(k log p), as depicted in panel (b), the curves roughly align for different values of p, agreeing with Theorem 1. Panel (c) shows analogous curves for VAR data with additive noise, using a driving matrix A with |||A|||_op = 0.2.
[Figure 1: three panels of ℓ2-error curves for p ∈ {128, 256, 512}; panel (a) additive noise, error versus n; panel (b) additive noise, error versus n/(k log p); panel (c) additive noise with autoregressive data, error versus n/(k log p).]

Figure 1. Plots of the error ‖β̂ − β*‖₂ after running projected gradient descent on the non-convex objective, with sparsity k ≈ √p. Plot (a) is an error plot for i.i.d. data with additive noise, and plot (b) shows ℓ2-error versus the rescaled sample size n/(k log p). Plot (c) depicts a similar (rescaled) plot for VAR data with additive noise. As predicted by Theorem 1, the curves align for different values of p in the rescaled plot.
We also verified the results of Theorem 2 empirically. Figure 2 shows the results of applying projected gradient descent to solve the optimization problem (4) in the cases of additive noise and missing data. We first applied projected gradient descent to obtain an initial estimate β̂, then reapplied projected gradient descent 10 times, tracking the optimization error ‖β^t − β̂‖₂ (in blue) and statistical error ‖β^t − β*‖₂ (in red). As predicted by Theorem 2, the iterates exhibit geometric convergence to roughly the same fixed point, regardless of starting point.
Finally, we simulated the inverse covariance matrix estimation algorithm on three types of graphical models (a construction sketch follows the list):

(a) Chain-structured graphs. In this case, all nodes are arranged in a line. The diagonal entries of Θ equal 1, and entries corresponding to links in the chain equal 0.1. Then Θ is rescaled so |||Θ|||_op = 1.

(b) Star-structured graphs. In this case, all nodes are connected to a central node, which has degree k ≈ 0.1p. All other nodes have degree 1. The diagonal entries of Θ are set equal to 1, and all entries corresponding to edges in the graph are set equal to 0.1. Then Θ is rescaled so |||Θ|||_op = 1.

(c) Erdős-Rényi graphs. As in Rothman et al. [22], we first generate a matrix B with diagonal entries 0, and all other entries independently equal to 0.5 with probability k/p, and 0 otherwise. Then δ is chosen so Θ = B + δI has condition number p, and Θ is rescaled so |||Θ|||_op = 1.
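Here is a minimal sketch (ours, not the authors' code) of the chain and star constructions above; for simplicity, the star's hub is connected to all other nodes rather than a 0.1p subset.

    import numpy as np

    def chain_precision(p, w=0.1):
        # Chain graph: 1 on the diagonal, w on consecutive-node links, rescaled
        Theta = np.eye(p)
        i = np.arange(p - 1)
        Theta[i, i + 1] = Theta[i + 1, i] = w
        return Theta / np.linalg.norm(Theta, 2)   # so |||Theta|||_op = 1

    def star_precision(p, w=0.1):
        # Star graph: hub node 0 connected to every other node, rescaled
        Theta = np.eye(p)
        Theta[0, 1:] = Theta[1:, 0] = w
        return Theta / np.linalg.norm(Theta, 2)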
[Figure 2: log-error versus iteration count (0 to 100) for (a) the additive noise case and (b) the missing data case; each panel shows the statistical error and the optimization error on a log scale.]

Figure 2. Plots of the optimization error log(‖β^t − β̂‖₂) and statistical error log(‖β^t − β*‖₂) versus iteration number t, generated by running projected gradient descent on the non-convex objective. As predicted by Theorem 2, the optimization error decreases geometrically.
After generating the matrix X of n i.i.d. samples from the appropriate graphical model, with covariance matrix Σ_x = Θ^{−1}, we generated the corrupted matrix Z = X + W with Σ_w = (0.2)² I. Figure 3 shows the rescaled ℓ2-error (1/√k) |||Θ̂ − Θ|||_op plotted against the sample size n for a chain-structured graph, with panel (a) showing the original plot and panel (b) plotting against the rescaled sample size. We obtained qualitatively similar results for the star and Erdős-Rényi graphs, in the presence of missing and/or dependent data.
[Figure 3: rescaled operator-norm error (1/√k) |||Θ̂ − Θ|||_op for a chain graph with p ∈ {64, 128, 256}; panel (a) plots the error versus n (ℓ2 error plot for chain graph, additive noise), panel (b) the rescaled plot versus n/(k log p).]

Figure 3. (a) Plots of the rescaled error (1/√k) |||Θ̂ − Θ|||_op after running projected gradient descent for a chain-structured Gaussian graphical model with additive noise. As predicted by Theorems 1 and 2, all curves align when the rescaled error is plotted against the ratio n/(k log p), as shown in (b). Each point represents the average over 50 trials.
5 Discussion

In this paper, we formulated an ℓ1-constrained minimization problem for sparse linear regression on corrupted data. The source of corruption may be additive noise or missing data, and although the resulting objective is not generally convex, we showed that projected gradient descent is guaranteed to converge to a point within statistical precision of the optimum. In addition, we established ℓ1- and ℓ2-error bounds that hold with high probability when the data are drawn i.i.d. from a sub-Gaussian distribution, or drawn from a Gaussian VAR process. Finally, we used our results on linear regression to perform sparse inverse covariance estimation for a Gaussian graphical model, based on corrupted data. The bounds we obtain for the spectral norm of the error are of the same order as existing bounds for inverse covariance matrix estimation with uncorrupted, i.i.d. data.
Acknowledgments
PL acknowledges support from a Hertz Foundation Fellowship and an NDSEG Fellowship; MJW and PL were
also partially supported by grants NSF-DMS-0907632 and AFOSR-09NL184. The authors thank Alekh Agarwal, Sahand Negahban, and John Duchi for discussions and guidance.
References
[1] R. Little and D. B. Rubin. Statistical Analysis with Missing Data. Wiley, New York, 1987.
[2] J. T. Hwang. Multiplicative errors-in-variables models with applications to recent data released by the U.S. Department of Energy. Journal of the American Statistical Association, 81(395):680-688, 1986.
[3] R. J. Carroll, D. Ruppert, and L. A. Stefanski. Measurement Error in Nonlinear Models. Chapman and Hall, 1995.
[4] S. J. Iturria, R. J. Carroll, and D. Firth. Polynomial regression and estimating functions in the presence of multiplicative measurement error. Journal of the Royal Statistical Society, Series B, 61:547-561, 1999.
[5] Q. Xu and J. You. Covariate selection for linear errors-in-variables regression models. Communications in Statistics - Theory and Methods, 36(2):375-386, 2007.
[6] N. Städler and P. Bühlmann. Missing values: Sparse inverse covariance estimation and an extension to sparse regression. Statistics and Computing, pages 1-17, 2010.
[7] M. Rosenbaum and A. B. Tsybakov. Sparse recovery under matrix uncertainty. Annals of Statistics, 38:2620-2651, 2010.
[8] P. Loh and M. J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. Technical report, UC Berkeley, September 2011. Available at http://arxiv.org/abs/1109.3714.
[9] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[10] S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33-61, 1998.
[11] S. van de Geer. The deterministic Lasso. In Proceedings of the Joint Statistical Meeting, 2007.
[12] P. J. Bickel, Y. Ritov, and A. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37(4):1705-1732, 2009.
[13] S. van de Geer and P. Bühlmann. On the conditions used to prove oracle results for the Lasso. Electronic Journal of Statistics, 3:1360-1392, 2009.
[14] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241-2259, 2010.
[15] A. Agarwal, S. Negahban, and M. J. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. Technical report, UC Berkeley, April 2011. Available at http://arxiv.org/abs/1104.4824.
[16] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In International Conference on Machine Learning, pages 272-279, 2008.
[17] C. H. Zhang and J. Huang. The sparsity and bias of the Lasso selection in high-dimensional linear regression. Annals of Statistics, 36(4):1567-1594, 2008.
[18] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics, 37(1):246-270, 2009.
[19] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for the analysis of regularized M-estimators. In Advances in Neural Information Processing Systems, 2009.
[20] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436-1462, 2006.
[21] M. Yuan. High-dimensional inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research, 99:2261-2286, August 2010.
[22] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494-515, 2008.
3,816 | 4,455 | k-NN Regression Adapts to Local Intrinsic Dimension
Samory Kpotufe
Max Planck Institute for Intelligent Systems
[email protected]
Abstract
Many nonparametric regressors were recently shown to converge at rates that depend only on the intrinsic dimension of data. These regressors thus escape the
curse of dimension when high-dimensional data has low intrinsic dimension (e.g.
a manifold). We show that k-NN regression is also adaptive to intrinsic dimension. In particular our rates are local to a query x and depend only on the way
masses of balls centered at x vary with radius.
Furthermore, we show a simple way to choose k = k(x) locally at any x so as to
nearly achieve the minimax rate at x in terms of the unknown intrinsic dimension
in the vicinity of x. We also establish that the minimax rate does not depend on a
particular choice of metric space or distribution, but rather that this minimax rate
holds for any metric space and doubling measure.
1 Introduction
We derive new rates of convergence in terms of dimension for the popular approach of Nearest
Neighbor (k-NN) regression. Our motivation is that, for good performance, k-NN regression can
require a number of samples exponential in the dimension of the input space X . This is the so-called
'curse of dimension'. Formally stated, the curse of dimension is the fact that, for any nonparametric
regressor there exists a distribution in R^D such that, given a training size n, the regressor converges
at a rate no better than n^{-1/O(D)} (see e.g. [1, 2]).
Fortunately it often occurs that high-dimensional data has low intrinsic dimension: typical examples
are data lying near low-dimensional manifolds [3, 4, 5]. We would hope that in these cases nonparametric regressors can escape the curse of dimension, i.e. their performance should only depend
on the intrinsic dimension of the data, appropriately formalized. In other words, if the data in R^D
has intrinsic dimension d << D, we would hope for a better convergence rate of the form n^{-1/O(d)}
instead of n^{-1/O(D)}. This has recently been shown to indeed be the case for methods such as kernel
regression [6], tree-based regression [7] and variants of these methods [8]. In the case of k-NN
regression [6], tree-based regression [7] and variants of these methods [8]. In the case of k-NN
regression however, it is only known that 1-NN regression (where k = 1) converges at a rate that depends on intrinsic dimension [9]. Unfortunately 1-NN regression is not consistent. For consistency,
it is well known that we need k to grow as a function of the sample size n [10] .
Our contributions are the following. We assume throughout that the target function f is Lipschitz.
First we show that, for a wide range of values of k ensuring consistency, k-NN regression converges
at a rate that only depends on the intrinsic dimension in a neighborhood of a query x. Our local
notion of dimension in a neighborhood of a point x relies on the well-studied notion of doubling
measure (see Section 2.3). In particular our dimension quantifies how the mass of balls vary with
radius, and this captures standard examples of data with low intrinsic dimension. Our second, and
perhaps most important contribution, is a simple procedure for choosing k = k(x) so as to nearly
achieve the minimax rate of O(n^{-2/(2+d)}) in terms of the unknown dimension d in a neighborhood
of x. Our final contribution is in showing that this minimax rate holds for any metric space and
doubling measure. In other words the hardness of the regression problem is not tied to a particular
choice of metric space X or doubling measure µ, but depends only on how the doubling measure µ
expands on a metric space X. Thus, for any marginal µ on X with expansion constant Θ(2^d), the
minimax rate for the measure space (X, µ) is Θ(n^{-2/(2+d)}).
1.1 Discussion
It is desirable to express regression rates in terms of a local notion of dimension rather than a global
one because the complexity of data can vary considerably over regions of space. Consider for example a dataset made up of a collection of manifolds of various dimensions. The global complexity
is necessarily of a worst case nature, i.e. is affected by the most complex regions of the space while
we might happen to query x from a less complex region. Worse, it can be the case that the data
is not complex locally anywhere, but globally the data is more complex. A simple example of this
is a so-called space filling curve where a low-dimensional manifold curves enough that globally it
seems to fill up space. We will see that the global complexity does not affect the behavior of k-NN
regression, provided k/n is sufficiently small. The behavior of k-NN regression is rather controlled
by the often smaller local dimension in a neighborhood B(x, r) of x, where the neighborhood size
r shrinks with k/n.
Given such a neighborhood B(x, r) of x, how does one choose k = k(x) optimally relative to the
unknown local dimension in B(x, r)? This is nontrivial as standard methods of (global) parameter
selection do not easily apply. For instance, it is unclear how to choose k by cross-validation over
possible settings: we do not know reliable surrogates for the true errors at x of the various estimators
{fn,k (x)}k?[n] . Another possibility is to estimate the dimension of the data in the vicinity of x, and
use this estimate to set k. However, for optimal rates, we have to estimate the dimension exactly and
we know of no finite sample result that guarantees the exact estimate of intrinsic dimension. Our
method consists of finding a value of k that balances quantities which control estimator variance and
bias at x, namely 1/k and distances to x?s k nearest neighbors. The method guarantees, uniformly
e n?2/(2+d) where d = d(x) is exactly the unknown local
over all x ? X , a near optimal rate of O
dimension on a neighborhood B(x, r) of x, where r ? 0 as n ? ?.
2 Setup
We are given n i.i.d. samples (X, Y) = {(X_i, Y_i)}_{i=1}^n from some unknown distribution, where the
input variable X belongs to a metric space (X, ρ), and the output Y is a real number. We assume
that the class B of balls on (X, ρ) has finite VC dimension V_B. This is true for instance for any
subset X of a Euclidean space, e.g. the low-dimensional spaces discussed in Section 2.3. The VC
assumption is however irrelevant to the minimax result of Theorem 3.
We denote the marginal distribution on X by µ and the empirical distribution on X by µ_n.
2.1 Regression function and noise
The regression function f(x) = E[Y|X = x] is assumed to be λ-Lipschitz, i.e. there exists λ > 0
such that ∀x, x′ ∈ X, |f(x) − f(x′)| ≤ λ·ρ(x, x′).
We assume a simple but general noise model: the distributions of the noise at points x ∈ X have
uniformly bounded tails and variance. In particular, Y is allowed to be unbounded. Formally:
  ∀δ > 0 there exists t > 0 such that sup_{x∈X} P_{Y|X=x}(|Y − f(x)| > t) ≤ δ.
We denote by t_Y(δ) the infimum over all such t. Also, we assume that the variance of (Y|X = x)
is upper-bounded by a constant σ_Y² uniformly over all x ∈ X.
To illustrate our noise assumptions, consider for instance the standard assumption of bounded noise,
i.e. |Y − f(x)| is uniformly bounded by some M > 0; then ∀δ > 0, t_Y(δ) ≤ M, and t_Y(δ) can thus be
replaced by M in all our results. Another standard assumption is that where the noise distribution
has exponentially decreasing tails; in this case ∀δ > 0, t_Y(δ) ≤ O(ln(1/δ)). As a last example, in the
case of Gaussian (or sub-Gaussian) noise, it is not hard to see that ∀δ > 0, t_Y(δ) ≤ O(√(ln(1/δ))).
2.2 Weighted k-NN regression estimate
We assume a kernel function K : R_+ → R_+, non-increasing, such that K(1) > 0, and K(u) = 0
for u > 1. For x ∈ X, let r_{k,n}(x) denote the distance to its k-th nearest neighbor in the sample X.
The regression estimate at x given the n-sample (X, Y) is then defined as
  f_{n,k}(x) = Σ_i [ K(ρ(x, x_i)/r_{k,n}(x)) / Σ_j K(ρ(x, x_j)/r_{k,n}(x)) ] · Y_i = Σ_i w_{i,k}(x)·Y_i.
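A minimal numpy sketch of this estimator may make the weighting concrete; the Euclidean metric and the box kernel K(u) = 1[u ≤ 1] are illustrative assumptions (any non-increasing K with K(1) > 0 and K(u) = 0 for u > 1 qualifies):

```python
import numpy as np

def knn_regress(x, X, Y, k, kernel=lambda u: np.where(u <= 1.0, 1.0, 0.0)):
    """Weighted k-NN estimate f_{n,k}(x), with rho = Euclidean distance."""
    dist = np.linalg.norm(X - x, axis=1)   # rho(x, X_i) for all i
    r_kn = np.sort(dist)[k - 1]            # r_{k,n}(x): k-th nearest-neighbor distance
    w = kernel(dist / r_kn)                # K(rho(x, x_i) / r_{k,n}(x))
    return np.dot(w, Y) / np.sum(w)        # sum_i w_{i,k}(x) * Y_i
```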
2.3 Notion of dimension
We start with the following definition of a doubling measure, which will lead to the notion of local
dimension used in this work. We stay informal in developing the motivation and refer the reader to
[?, 11, 12] for thorough overviews of the topic of metric space dimension and doubling measures.
Definition 1. The marginal µ is a doubling measure if there exists C_db > 0 such that for any x ∈ X
and r ≥ 0, we have µ(B(x, r)) ≤ C_db·µ(B(x, r/2)). The quantity C_db is called an expansion
constant of µ.
An equivalent definition is that µ is doubling if there exist C and d such that for any x ∈ X, for any
r ≥ 0 and any 0 < ε < 1, we have µ(B(x, r)) ≤ C·ε^{-d}·µ(B(x, εr)). Here d acts as a dimension. It
is not hard to show that d can be chosen as log₂ C_db and C as C_db (see e.g. [?]).
A simple example of a doubling measure is the Lebesgue volume in the Euclidean space R^d. For any
x ∈ R^d and r > 0, vol(B(x, r)) = vol(B(x, 1))·r^d. Thus vol(B(x, εr))/vol(B(x, r)) = ε^d
for any x ∈ R^d, r > 0 and 0 < ε < 1. Building upon the doubling behavior of volumes in R^d,
we can construct various examples of doubling probability measures. The following ingredients are
sufficient. Let X ⊂ R^D be a subset of a d-dimensional hyperplane, and let X satisfy for all balls
B(x, r) with x ∈ X, vol(B(x, r) ∩ X) = Θ(r^d), the volume being with respect to the containing
hyperplane. Now let µ be approximately uniform, that is, µ satisfies for all such balls B(x, r),
µ(B(x, r) ∩ X) = Θ(vol(B(x, r) ∩ X)). We then have µ(B(x, εr))/µ(B(x, r)) = Θ(ε^d).
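As a quick numerical illustration of this construction (a sketch on assumed synthetic data, not an experiment from the paper), the ball-mass ratio µ(B(x, εr))/µ(B(x, r)) ≈ ε^d can be read off empirically, and it recovers the hyperplane dimension d rather than the ambient dimension D:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, D = 20000, 2, 10                 # d-dim hyperplane embedded in R^D
X = np.zeros((n, D))
X[:, :d] = rng.uniform(-1.0, 1.0, size=(n, d))

x, r, eps = np.zeros(D), 0.5, 0.5      # query at the origin
dist = np.linalg.norm(X - x, axis=1)
ratio = np.mean(dist <= eps * r) / np.mean(dist <= r)
# mu(B(x, eps*r)) / mu(B(x, r)) ~ eps^d, so the log-ratio recovers d:
print(np.log(ratio) / np.log(eps))     # roughly 2, not 10
```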
Unfortunately a global notion of dimension such as the above definition of d is rather restrictive, as
it requires the same complexity globally and locally. However a data space can be complex globally
and have small complexity locally. Consider for instance a d-dimensional submanifold X of R^D,
and let µ have an upper- and lower-bounded density on X. The manifold might be globally complex,
but the restriction of µ to a ball B(x, r), x ∈ X, is doubling with local dimension d, provided r is
sufficiently small and certain conditions on curvature hold. This is because, under such conditions
(see e.g. the Bishop-Gromov theorem [13]), the volume (in X) of B(x, r) ∩ X is Θ(r^d).
The above example motivates the following definition of local dimension d.
Definition 2. Fix x ∈ X, and r > 0. Let C ≥ 1 and d ≥ 1. The marginal µ is (C, d)-homogeneous
on B(x, r) if we have µ(B(x, r′)) ≤ C·ε^{-d}·µ(B(x, εr′)) for all r′ ≤ r and 0 < ε < 1.
The above definition covers cases other than manifolds. In particular, another space with small local
dimension is a sparse data space X ⊂ R^D where each vector x has at most d non-zero coordinates,
i.e. X is a collection of finitely many hyperplanes of dimension at most d. More generally, suppose
the data distribution µ is a mixture Σ_i π_i·µ_i of finitely many distributions µ_i with potentially different low-dimensional supports. Then if all µ_i supported on a ball B are (C_i, d)-homogeneous on B,
i.e. all have local dimension d on B, then µ is also (C, d)-homogeneous on B for some C.
We want rates of convergence which hold uniformly over all regions where µ is doubling. We
therefore also require (Definition 3) that the C and d from Definition 2 are uniformly upper-bounded.
This will be the case in many situations, including the above examples.
Definition 3. The marginal µ is (C_0, d_0)-maximally-homogeneous for some C_0 ≥ 1 and d_0 ≥ 1,
if the following holds for all x ∈ X and r > 0: suppose there exist C ≥ 1 and d ≥ 1 such that µ is
(C, d)-homogeneous on B(x, r); then µ is (C_0, d_0)-homogeneous on B(x, r).
We note that, rather than assuming as in Definition 3 that all local dimensions are at most d0 , we
can express our results in terms of the subset of X where local dimensions are at most d0 . In this
case d0 would be allowed to grow with n. The less general assumption of Definition 3 allows for a
clearer presentation which still captures the local behavior of k-NN regression.
3 Overview of results
3.1 Local rates for fixed k
The first result below establishes the rates of convergence for any k ≳ ln n in terms of the (unknown)
complexity on B(x, r), where r is any radius satisfying µ(B(x, r)) > Ω(k/n) (we need at least Ω(k)
samples in the relevant neighborhoods of x).
Theorem 1. Suppose µ is (C_0, d_0)-maximally-homogeneous, and B has finite VC dimension V_B.
Let 0 < δ < 1. With probability at least 1 − 2δ over the choice of (X, Y), the following holds
simultaneously for all x ∈ X and k satisfying n > k ≥ V_B ln 2n + ln(8/δ).
Pick any x ∈ X. Let r > 0 satisfy µ(B(x, r)) > 3C_0·k/n. Suppose µ is (C, d)-homogeneous on
B(x, r), with 1 ≤ C ≤ C_0 and 1 ≤ d ≤ d_0. We have
  |f_{n,k}(x) − f(x)|² ≤ (2K(0)/K(1)) · ( V_B·t_Y²(δ/2n)·ln(2n/δ) + σ_Y² ) / k + 2λ²r²·( 3Ck / (n·µ(B(x, r))) )^{2/d}.
Note that the above rates hold uniformly over x, k ≳ ln n, and any r where µ(B(x, r)) ≥ Ω(k/n).
The rate also depends on µ(B(x, r)), and suggests that the best scenario is that where x has a small
neighborhood of large mass and small dimension d.
3.2 Minimax rates for a doubling measure
Our next result shows that the hardness of the regression problem is not tied to a particular choice
of the metric X or the doubling measure µ. The result relies mainly on the fact that µ is doubling on
X. We however assume that µ has the same expansion constant everywhere and that this constant
is tight. This does not however make the lower-bound less expressive, as it still tells us which rates
to expect locally. Thus if µ is (C, d)-homogeneous near x, we cannot expect a better rate than
O(n^{-2/(2+d)}) (assuming a Lipschitz regression function f).
Theorem 2. Let µ be a doubling measure on a metric space (X, ρ) of diameter 1, and suppose µ
satisfies, for all x ∈ X, for all r > 0 and 0 < ε < 1,
  C_1·ε^d·µ(B(x, r)) ≤ µ(B(x, εr)) ≤ C_2·ε^d·µ(B(x, r)),
where C_1, C_2 and d are positive constants independent of x, r, and ε. Let Y be a subset of R and
let λ > 0. Define D_{µ,λ} as the class of distributions on X × Y such that X ∼ µ and the output
Y = f(X) + N(0, 1), where f is any λ-Lipschitz function from X to Y. Fix a sample size n > 0 and
let f_n denote any regressor on samples (X, Y) of size n, i.e. f_n maps any such sample to a function
f_{n|(X,Y)}(·) : X → Y in L²(µ). There exists a constant C independent of n and λ such that
  inf_{f_n} sup_{D_{µ,λ}}  E_{X,Y,x} |f_{n|(X,Y)}(x) − f(x)|² / ( λ^{2d/(2+d)}·n^{-2/(2+d)} ) ≥ C.
3.3 Choosing k for near-optimal rates at x
Our last result shows a practical and simple way to choose k locally so as to nearly achieve the
minimax rate at x, i.e. a rate that depends on the unknown local dimension in a neighborhood
B(x, r) of x, where again, r satisfies µ(B(x, r)) > Ω(k/n) for good choices of k. It turns out that
we just need µ(B(x, r)) > Ω(n^{-1/3}).
As we will see, the choice of k simply consists of monitoring the distances from x to its nearest
neighbors. The intuition, similar to that of a method for tree-pruning in [7], is to look for a k that
balances the variance (roughly 1/k) and the square bias (roughly r²_{k,n}(x)) of the estimate. The
procedure is as follows:
Choosing k at x: Pick ∆ ≥ max_i ρ(x, X_i), and pick θ_{n,δ} ≥ ln(n/δ).
Let k_1 be the highest integer in [n] such that ∆²·θ_{n,δ}/k_1 ≥ r²_{k_1,n}(x).
Define k_2 = k_1 + 1 and choose k as argmin_{k_i, i∈[2]} { θ_{n,δ}/k_i + r²_{k_i,n}(x) }.
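The procedure is easy to implement; a sketch (assuming Euclidean ρ, and taking the smallest admissible ∆ and θ_{n,δ} purely for illustration) might look as follows:

```python
import numpy as np

def choose_k(x, X, delta=0.05):
    """Pick k = k(x) by balancing theta/k against r_{k,n}(x)^2."""
    n = X.shape[0]
    r = np.sort(np.linalg.norm(X - x, axis=1))   # r[k-1] = r_{k,n}(x)
    Delta = r[-1]                                # Delta >= max_i rho(x, X_i)
    theta = np.log(n / delta)                    # theta_{n,delta} >= ln(n/delta)
    ks = np.arange(1, n + 1)
    cond = Delta**2 * theta / ks >= r**2         # the k_1 condition
    k1 = ks[cond][-1] if cond.any() else 1       # highest k satisfying it
    k2 = min(k1 + 1, n)
    crit = lambda k: theta / k + r[k - 1]**2     # variance + squared-bias proxy
    return k1 if crit(k1) <= crit(k2) else k2
```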
The parameter θ_{n,δ} guesses how the noise in Y affects the risk. This will soon be clearer. Performance guarantees for the above procedure are given in the following theorem.
Theorem 3. Suppose µ is (C_0, d_0)-maximally-homogeneous, and B has finite VC dimension V_B.
Assume k is chosen for each x ∈ X using the above procedure, and let f_{n,k}(x) be the corresponding
estimate. Let 0 < δ < 1 and suppose n^{4/(6+3d_0)} > ( V_B ln 2n + ln(8/δ) ) / θ_{n,δ}. With probability at
least 1 − 2δ over the choice of (X, Y), the following holds simultaneously for all x ∈ X.
Pick any x ∈ X. Let 0 < r < ∆ satisfy µ(B(x, r)) > 6C_0·n^{-1/3}. Suppose µ is (C, d)-homogeneous
on B(x, r), with 1 ≤ C ≤ C_0 and 1 ≤ d ≤ d_0. We have
  |f_{n,k}(x) − f(x)|² ≤ ( 2C_{n,δ}/θ_{n,δ} + 2λ² )·(1 + 4∆²)·( 3Cθ_{n,δ} / (n·µ(B(x, r))) )^{2/(2+d)},
where C_{n,δ} = ( V_B·t_Y²(δ/2n)·ln(2n/δ) + σ_Y² )·K(0)/K(1).
Suppose we set θ_{n,δ} = ln²(n/δ). Then, as per the discussion in Section 2.1, if the noise in Y is
Gaussian, we have t_Y²(δ/2n) = O(ln(n/δ)), and therefore the factor C_{n,δ}/θ_{n,δ} = O(1). Thus
ideally we want to set θ_{n,δ} to the order of t_Y²(δ/2n)·ln(n/δ).
Just as in Theorem 1, the rates of Theorem 3 hold uniformly for all x ∈ X, and all 0 < r < ∆
where µ(B(x, r)) > Ω(n^{-1/3}). For any such r, let us call B(x, r) an admissible neighborhood. It is
clear that, as n grows to infinity, w.h.p. any neighborhood B(x, r) of x, 0 < r < sup_{x′∈X} ρ(x, x′),
becomes admissible. Once a neighborhood B(x, r) is admissible for some n, our procedure nearly
attains the minimax rates in terms of the local dimension on B(x, r), provided µ is doubling on
B(x, r). Again, the mass of an admissible neighborhood affects the rate, and the bound in Theorem
3 is best for an admissible neighborhood with large mass µ(B(x, r)) and small dimension d.
4 Analysis
Define f̃_{n,k}(x) = E_{Y|X} f_{n,k}(x) = Σ_i w_{i,k}(x)·f(X_i). We will bound the error of the estimate at a
point x in a standard way as
  |f_{n,k}(x) − f(x)|² ≤ 2|f_{n,k}(x) − f̃_{n,k}(x)|² + 2|f̃_{n,k}(x) − f(x)|².  (1)
Theorem 1 is therefore obtained by combining bounds on the above two r.h.s terms (variance and
bias). These terms are bounded separately in Lemma 2 and Lemma 3 below.
4.1 Local rates for fixed k: bias and variance at x
In this section we bound the bias and variance terms of equation (1) with high probability, uniformly
over x ∈ X. We will need the following lemma which follows easily from standard VC theory [14]
results. The proof is given in the long version [15].
Lemma 1. Let B denote the class of balls on X, with VC-dimension V_B. Let 0 < δ < 1, and define
α_n = ( V_B ln 2n + ln(8/δ) )/n. The following holds with probability at least 1 − δ for all balls in
B. Pick any a ≥ α_n. Then µ(B) ≥ 3a ⟹ µ_n(B) ≥ a and µ_n(B) ≥ 3a ⟹ µ(B) ≥ a.
We start with the bias, which is simpler to handle: it is easy to show that the bias of the estimate
at x depends on the radius r_{k,n}(x). This radius can then be bounded, first in expectation using the
doubling assumption on µ, then by calling on the above lemma to relate this expected bound to
r_{k,n}(x) with high probability.
Lemma 2 (Bias). Suppose µ is (C_0, d_0)-maximally-homogeneous. Let 0 < δ < 1. With probability
at least 1 − δ over the choice of X, the following holds simultaneously for all x ∈ X and k satisfying
n > k ≥ V_B ln 2n + ln(8/δ).
Pick any x ∈ X. Let r > 0 satisfy µ(B(x, r)) > 3C_0·k/n. Suppose µ is (C, d)-homogeneous on
B(x, r), with 1 ≤ C ≤ C_0 and 1 ≤ d ≤ d_0. We have:
  |f̃_{n,k}(x) − f(x)|² ≤ λ²r²·( 3Ck / (n·µ(B(x, r))) )^{2/d}.
Proof. First fix X, x ∈ X and k ∈ [n]. We have:
  |f̃_{n,k}(x) − f(x)| = | Σ_i w_{i,k}(x)·(f(X_i) − f(x)) | ≤ Σ_i w_{i,k}(x)·|f(X_i) − f(x)| ≤ Σ_i w_{i,k}(x)·λ·ρ(X_i, x) ≤ λ·r_{k,n}(x).  (2)
We therefore just need to bound r_{k,n}(x). We proceed as follows.
Fix x ∈ X and k, and pick any r > 0 such that µ(B(x, r)) > 3C_0·k/n. Suppose µ is (C, d)-homogeneous on B(x, r), with 1 ≤ C ≤ C_0 and 1 ≤ d ≤ d_0. Define
  ε = ( 3Ck / (n·µ(B(x, r))) )^{1/d},
so that ε < 1 by the bound on µ(B(x, r)); then by the local doubling assumption on B(x, r),
we have µ(B(x, εr)) ≥ C^{-1}·ε^d·µ(B(x, r)) ≥ 3k/n. Let α_n be as defined in Lemma 1, and assume
k/n ≥ α_n (this is exactly the assumption on k in the lemma statement). By Lemma 1, it follows that
with probability at least 1 − δ, uniform over x, r and k thus chosen, we have µ_n(B(x, εr)) ≥ k/n,
implying that r_{k,n}(x) ≤ εr. We then conclude with the lemma statement by using equation (2).
Lemma 3 (Variance). Let 0 < δ < 1. With probability at least 1 − 2δ over the choice of (X, Y),
the following then holds simultaneously for all x ∈ X and k ∈ [n]:
  |f_{n,k}(x) − f̃_{n,k}(x)|² ≤ (K(0)/K(1)) · ( V_B·t_Y²(δ/2n)·ln(2n/δ) + σ_Y² ) / k.
Proof. First, condition on X fixed. For any x ∈ X, k ∈ [n], let Y_{x,k} denote the subset of Y
corresponding to points from X falling in B(x, r_{k,n}(x)). For X fixed, the number of such subsets
Y_{x,k} is therefore at most the number of ways we can intersect balls in B with the sample X; this is
in turn upper-bounded by n^{V_B}, as is well-known in VC theory.
Let ψ(Y_{x,k}) := |f_{n,k}(x) − f̃_{n,k}(x)|. We'll proceed by showing that with high probability, for all
x ∈ X, ψ(Y_{x,k}) is close to its expectation; then we bound this expectation.
Let δ_0 ≤ 1/2n. We further condition on the event Y_{δ_0} that for all n samples Y_i, |Y_i − f(X_i)| ≤
t_Y(δ_0). By definition of t_Y(δ_0), the event Y_{δ_0} happens with probability at least 1 − nδ_0 ≥ 1/2. It
follows that for any x ∈ X,
  E ψ(Y_{x,k}) ≥ P(Y_{δ_0}) · E_{Y_{δ_0}} ψ(Y_{x,k}) ≥ (1/2)·E_{Y_{δ_0}} ψ(Y_{x,k}),
where E_{Y_{δ_0}}[·] denotes conditional expectation under the event. Let ε > 0; we in turn have
  P( ∃x, k : ψ(Y_{x,k}) > 2E ψ(Y_{x,k}) + ε ) ≤ P( ∃x, k : ψ(Y_{x,k}) > E_{Y_{δ_0}} ψ(Y_{x,k}) + ε )
  ≤ P_{Y_{δ_0}}( ∃x, k : ψ(Y_{x,k}) > E_{Y_{δ_0}} ψ(Y_{x,k}) + ε ) + nδ_0.
This last probability can be bounded by applying McDiarmid's inequality: changing any Y_i value
changes ψ(Y_{x,k}) by at most w_{i,k}·t_Y(δ_0) when we condition on the event Y_{δ_0}. This, followed by a
union bound, yields
  P_{Y_{δ_0}}( ∃x, k : ψ(Y_{x,k}) > E_{Y_{δ_0}} ψ(Y_{x,k}) + ε ) ≤ n^{V_B}·exp{ −2ε² / ( t_Y²(δ_0)·Σ_i w²_{i,k} ) }.
Combining with the above we get
  P( ∃x ∈ X : ψ(Y_{x,k}) > 2E ψ(Y_{x,k}) + ε ) ≤ n^{V_B}·exp{ −2ε² / ( t_Y²(δ_0)·Σ_i w²_{i,k} ) } + nδ_0.
In other words, let δ_0 = δ/2n; with probability at least 1 − δ, for all x ∈ X and k ∈ [n],
  |f_{n,k}(x) − f̃_{n,k}(x)|² ≤ 8·( E_{Y|X} |f_{n,k}(x) − f̃_{n,k}(x)| )² + t_Y²(δ/2n)·V_B·ln(2n/δ)·Σ_i w²_{i,k}
  ≤ 8·E_{Y|X} |f_{n,k}(x) − f̃_{n,k}(x)|² + t_Y²(δ/2n)·V_B·ln(2n/δ)·Σ_i w²_{i,k},
where the second inequality is an application of Jensen's.
We bound the above expectation on the r.h.s. next. In what follows (second equality below) we use
the fact that for i.i.d. random variables z_i with zero mean, E|Σ_i z_i|² = Σ_i E|z_i|². We have
  E_{Y|X} |f_{n,k}(x) − f̃_{n,k}(x)|² = E_{Y|X} | Σ_i w_{i,k}(x)·(Y_i − f(X_i)) |²
  = Σ_i w²_{i,k}(x)·E_{Y|X} |Y_i − f(X_i)|² ≤ Σ_i w²_{i,k}(x)·σ_Y².
Combining with the previous bound we get that, with probability at least 1 − δ, for all x and k,
  |f_{n,k}(x) − f̃_{n,k}(x)|² ≤ ( V_B·t_Y²(δ/2n)·ln(2n/δ) + σ_Y² ) · Σ_i w²_{i,k}(x).  (3)
We can now bound Σ_i w²_{i,k}(x) as follows:
  Σ_i w²_{i,k}(x) ≤ max_{i∈[n]} w_{i,k}(x) = max_{i∈[n]} K(ρ(x, x_i)/r_{k,n}(x)) / Σ_j K(ρ(x, x_j)/r_{k,n}(x))
  ≤ K(0) / Σ_j K(ρ(x, x_j)/r_{k,n}(x)) ≤ K(0) / Σ_{x_j∈B(x, r_{k,n}(x))} K(ρ(x, x_j)/r_{k,n}(x)) ≤ K(0) / (K(1)·k).
Plug this back into equation (3) and conclude.
4.2 Minimax rates for a doubling measure
The minimax rates of Theorem 2 (proved in the long version [15]) are obtained, as is commonly
done, by constructing a regression problem that reduces to the problem of binary classification (see
e.g. [1, 2, 10]). Intuitively the problem of classification is hard in those instances where labels (say
−1, +1) vary wildly over the space X, i.e. close points can have different labels. We make the
regression problem similarly hard. We will consider a class of candidate regression functions such
that each function f alternates between positive and negative in neighboring regions (in the original
figure, omitted here, f is depicted as a dashed line alternating in sign over such regions).
The reduction relies on the simple observation that for a regressor f_n to approximate the right f
from data it needs to at least identify the sign of f in the various regions of space. The more we can
make each such f change between positive and negative, the harder the problem. We are however
constrained in how much f changes since we also have to ensure that each f is Lipschitz continuous.
4.3 Choosing k for near-optimal rates at x
Proof of Theorem 3. Fix x and let r, d, C be as defined in the theorem statement. Define
  κ = θ_{n,δ}^{d/(2+d)} · ( n·µ(B(x, r)) / (3C) )^{2/(2+d)}   and   ε = ( 3Cκ / (n·µ(B(x, r))) )^{1/d}.
Note that, by our assumptions,
  κ/n ≤ n^{2/(2+d)}/n ≤ µ(B(x, r))/(6C).  (4)
The above equation (4) implies ε < 1. Thus, by the homogeneity assumption on B(x, r),
µ(B(x, εr)) ≥ C^{-1}·ε^d·µ(B(x, r)) ≥ 3κ/n. Now by the first inequality of (4) we also have
  µ(B(x, r)) > 6Cθ_{n,δ}·n^{-1/3} ≥ 6Cθ_{n,δ}·n^{-d/(2+d)}, so that κ/n ≥ θ_{n,δ}·n^{4/(6+3d)}/n ≥ θ_{n,δ}·n^{4/(6+3d_0)}/n ≥ α_n,
where α_n = ( V_B ln 2n + ln(8/δ) )/n is as defined in Lemma 1. We can thus apply Lemma 1 to
have that, with probability at least 1 − δ, µ_n(B(x, εr)) ≥ κ/n. In other words, for any k ≤ κ,
r_{k,n}(x) ≤ εr. It follows that if k ≤ κ,
  ∆²·θ_{n,δ}/k ≥ ∆²·θ_{n,δ}/κ = ∆²ε² ≥ (εr)² ≥ r²_{k,n}(x).
Remember that the above inequality is exactly the condition on the choice of k_1 in the theorem
statement. Therefore, suppose k_1 ≤ κ; it must be that k_2 > κ, otherwise k_2 is the highest integer
satisfying the condition, contradicting our choice of k_1. Thus we have (i) θ_{n,δ}/k_2 < θ_{n,δ}/κ = ε².
We also have (ii) r_{k_2,n}(x) ≤ 2^{1/d}·εr. To see this, notice that since k_1 ≤ κ < k_2 = k_1 + 1 we have
k_2 ≤ 2κ; now by repeating the sort of argument above, we have µ(B(x, 2^{1/d}εr)) ≥ 6κ/n, which by
Lemma 1 implies that µ_n(B(x, 2^{1/d}εr)) ≥ 2κ/n ≥ k_2/n.
Now suppose instead that k_1 > κ; then by definition of k_1, we have (iii)
  r²_{k_1,n}(x) ≤ ∆²·θ_{n,δ}/k_1 ≤ ∆²·θ_{n,δ}/κ = (∆ε)².
The following holds by (i), (ii), and (iii). Let k be chosen as in the theorem statement. Then, whether
k_1 > κ or not, it is true that
  θ_{n,δ}/k + r²_{k,n}(x) ≤ (1 + 4∆²)·ε² = (1 + 4∆²)·( 3Cθ_{n,δ} / (n·µ(B(x, r))) )^{2/(2+d)}.
Now combine Lemma 3 with equation (2) and we have that, with probability at least 1 − 2δ (accounting for all events discussed),
  |f_{n,k}(x) − f(x)|² ≤ (2C_{n,δ}/θ_{n,δ})·θ_{n,δ}/k + 2λ²·r²_{k,n}(x) ≤ ( 2C_{n,δ}/θ_{n,δ} + 2λ² )·( θ_{n,δ}/k + r²_{k,n}(x) )
  ≤ ( 2C_{n,δ}/θ_{n,δ} + 2λ² )·(1 + 4∆²)·( 3Cθ_{n,δ} / (n·µ(B(x, r))) )^{2/(2+d)}.
5 Final remark
The problem of choosing k = k(x) optimally at x is similar to the problem of local bandwidth
selection for kernel-based methods (see e.g. [16, 17]), and our method for choosing k might yield
insights into bandwidth selection, since k-NN and kernel regression methods only differ in their
notion of neighborhood of a query x.
Acknowledgments
I am grateful to David Balduzzi for many useful discussions.
References
[1] C. J. Stone. Optimal rates of convergence for non-parametric estimators. Ann. Statist., 8:1348–1360, 1980.
[2] C. J. Stone. Optimal global rates of convergence for non-parametric estimators. Ann. Statist., 10:1340–1353, 1982.
[3] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290, 2000.
[4] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290, 2000.
[5] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[6] P. Bickel and B. Li. Local polynomial regression on unknown manifolds. Tech. Rep., Dep. of Stats., UC Berkeley, 2006.
[7] S. Kpotufe. Escaping the curse of dimensionality with a tree-based regressor. Conference On Learning Theory, 2009.
[8] S. Kpotufe. Fast, smooth, and adaptive regression in metric spaces. Neural Information Processing Systems, 2009.
[9] S. Kulkarni and S. Posner. Rates of convergence of nearest neighbor estimation under arbitrary sampling. IEEE Transactions on Information Theory, 41, 1995.
[10] L. Gyorfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution Free Theory of Nonparametric Regression. Springer, New York, NY, 2002.
[11] C. Cutler. A review of the theory and estimation of fractal dimension. Nonlinear Time Series and Chaos, Vol. I: Dimension Estimation and Models, 1993.
[12] K. Clarkson. Nearest-neighbor searching and metric space dimensions. Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, 2005.
[13] M. do Carmo. Riemannian Geometry. Birkhauser, 1992.
[14] V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events to their expectation. Theory of Probability and its Applications, 16:264–280, 1971.
[15] S. Kpotufe. k-NN regression adapts to local intrinsic dimension. arXiv:1110.4300, 2011.
[16] J. G. Staniswalis. Local bandwidth selection for kernel estimates. Journal of the American Statistical Association, 84:284–288, 1989.
[17] R. Cao-Abad. Rate of convergence for the wild bootstrap in nonparametric regression. Annals of Statistics, 19:2226–2231, 1991.
3,817 | 4,456 | Unifying Non-Maximum Likelihood Learning
Objectives with Minimum KL Contraction
Siwei Lyu
Computer Science Department
University at Albany, State University of New York
[email protected]
Abstract
When used to learn high dimensional parametric probabilistic models, the classical maximum likelihood (ML) learning often suffers from computational intractability, which motivates the active developments of non-ML learning methods. Yet, because of their divergent motivations and forms, the objective functions of many non-ML learning methods are seemingly unrelated, and there lacks
a unified framework to understand them. In this work, based on an information
geometric view of parametric learning, we introduce a general non-ML learning
principle termed as minimum KL contraction, where we seek optimal parameters
that minimizes the contraction of the KL divergence between the two distributions
after they are transformed with a KL contraction operator. We then show that
the objective functions of several important or recently developed non-ML learning methods, including contrastive divergence [12], noise-contrastive estimation
[11], partial likelihood [7], non-local contrastive objectives [31], score matching [14], pseudo-likelihood [3], maximum conditional likelihood [17], maximum
mutual information [2], maximum marginal likelihood [9], and conditional and
marginal composite likelihood [24], can be unified under the minimum KL contraction framework with different choices of the KL contraction operators.
1 Introduction
Fitting parametric probabilistic models to data is a basic task in statistics and machine learning.
Given a set of training data {x^{(1)}, ···, x^{(n)}}, parameter learning aims to find a member in a
parametric distribution family, q_θ, to best represent the training data. In practice, many useful
high dimensional parametric probabilistic models, such as Markov random fields [18] or products
of experts [12], are defined as q_θ(x) = q̃_θ(x)/Z(θ), where q̃_θ is the unnormalized model and
Z(θ) = ∫_{R^d} q̃_θ(x)dx is the partition function. The maximum (log) likelihood (ML) estimation is
the most commonly used method for parameter learning, where the optimal parameter is obtained
by solving argmax_θ (1/n)·Σ_{k=1}^n log q_θ(x^{(k)}). The obtained ML estimators have many desirable properties,
such as consistency and asymptotic normality [21]. However, because of the high dimensional
integration/summation, the partition function in q_θ oftentimes makes ML learning computationally
intractable. For this reason, non-ML parameter learning methods that use 'tricks' to obviate direct
practical performances, with a few exceptions, their different learning objectives and numerical implementations seem to suggest that they are largely unrelated.
In this work, based on the information geometric view of parametric learning, we elaborate on a general non-ML learning principle termed as minimum KL contraction (MKC), where we seek optimal
parameters that minimize the contraction of the KL divergence between two distributions after they
are transformed with a KL contraction operator. The KL contraction operator is a mapping between
probability distributions under which the KL divergence of two distributions tends to reduce unless
they are equal. We then show that the objective functions of a wide range of non-ML learning methods, including contrastive divergence [12], noise-contrastive estimation [11], partial likelihood [7],
non-local contrastive objectives [31], score matching [14], pseudo-likelihood [3], maximum conditional likelihood [17], maximum mutual information [2], maximum marginal likelihood [9], and
conditional and marginal composite likelihood [24], can all be unified under the MKC framework
with different choices of the KL contraction operators and MKC objective functions.
2 Related Works
Similarities in the parameter updates among non-ML learning methods have been noticed in several
recent works. For instance, in [15], it is shown that the parameter update in score matching [14] is
equivalent to the parameter update in a version of contrastive divergence [12] that performs Langevin
approximation instead of Gibbs sampling, and both are approximations to the parameter update of
pseudo-likelihood [3]. This connection is further generalized in [1], which shows that parameter
update in another variant of contrastive divergence is equivalent to a stochastic parameter update
in conditional composite likelihood [24]. However, such similarities in numerical implementations
are only tangential to the more fundamental relationship among the objective functions of different
non-ML learning methods. On the other hand, the energy based learning [22] presents a general
framework that subsume most non-ML learning objectives, but its broad generality also obscures
their specific statistical interpretations.
At the objective function level, relations between some non-ML methods are known. For instance,
it is known that pseudo-likelihood is a special case of conditional composite likelihood [30]. In
[10], several non-ML learning methods are unified under the framework of minimizing Bregman
divergence.
3 KL Contraction Operator
We base our discussion hereafter on continuous variables and probability density functions. Most
results can be readily extended to the discrete case by replacing integrations and probability density
functions with summations and probability mass functions. We denote Ω_d as the set of all probability
density functions over R^d. For two different probability distributions p, q ∈ Ω_d, their Kullback-Leibler (KL) divergence (also known as relative entropy or I-divergence) [6] is defined as
  KL(p‖q) = ∫_{R^d} p(x)·log( p(x)/q(x) ) dx.
KL divergence is non-negative and equals zero if and only if p = q almost everywhere (a.e.). We
define a distribution operator, Φ, as a mapping between a density function p ∈ Ω_d and another
density function p̃ ∈ Ω_{d′}, and adopt the shorthand notation p̃ = Φ{p}. A distribution p is a fixed
point of a distribution operator Φ if p = Φ{p}.
A KL contraction operator is a distribution operator, Φ : Ω_d → Ω_{d′}, such that for any p, q ∈ Ω_d,
there exists a constant β ≥ 1 for the following condition to hold:
  KL(p‖q) − β·KL(Φ{p}‖Φ{q}) ≥ 0.  (1)
Subsequently, β is known as the contraction factor, and the LHS of Eq.(1) is the KL contraction of
p and q under Φ. Obviously, if p = q (a.e.), their KL contraction, as well as their KL divergence, is
zero. In addition, a KL contraction operator is strict if the equality in Eq.(1) holds only when p = q
(a.e.). Intuitively, if the KL divergence is regarded as a 'distance' metric of probability distributions¹,
then it is never increased after both distributions are transformed with a KL contraction operator, a
graphical illustration of which is shown in Fig. 1. Furthermore, under a strict KL contraction operator,
the KL divergence is always reduced unless the two distributions are equal (a.e.). The KL contraction
operators are analogous to the contraction operators in ordinary metric spaces, with β having a
similar role as the Lipschitz constant [19].
[Figure 1: Illustration of a KL contraction operator on two density functions p and q, which are mapped to p̃ = Φ{p} and q̃ = Φ{q} with KL(p̃‖q̃) no larger than KL(p‖q).]
¹ Indeed, it is known that the KL divergence behaves like the squared Euclidean distance [6].
Eq.(1) gives the general and abstract definition of KL contraction operators. In the following, we
give several examples of KL contraction operators that are constructed from common operations of
probability distributions.
3.1 Conditional Distribution
We can form a family of KL contraction operators using conditional distributions. Consider x ∈ R^d
with distribution p(x) ∈ Ω_d and y ∈ R^{d′}; from a conditional distribution, T(y|x), we can define a
distribution operator, as
  Φ^c_T{p}(y) = ∫_{R^d} T(y|x)·p(x) dx = p̃(y).  (2)
The following result shows that Φ^c_T is a strict KL contraction operator with β = 1.
Lemma 1 (Cover & Thomas [6])² For two distributions p, q ∈ Ω_d, with the distribution operator
defined in Eq.(2), we have
  KL(p‖q) − KL(Φ^c_T{p} ‖ Φ^c_T{q}) = ∫_{R^{d′}} p̃(y)·KL( T_p(x|y) ‖ T_q(x|y) ) dy ≥ 0,
where T_p(x|y) = T(y|x)·p(x)/p̃(y) and T_q(x|y) = T(y|x)·q(x)/q̃(y) are the induced conditional distributions
from p and q with T. Furthermore, the equality holds if and only if p = q (a.e.).
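Lemma 1 is the data-processing inequality, and its discrete analogue is easy to check numerically; the sketch below (arbitrary random distributions, purely illustrative) shows the KL divergence never increasing under a stochastic map:

```python
import numpy as np

rng = np.random.default_rng(1)
kl = lambda p, q: float(np.sum(p * np.log(p / q)))

m = 5
p, q = rng.dirichlet(np.ones(m)), rng.dirichlet(np.ones(m))
T = rng.dirichlet(np.ones(m), size=m).T   # T[y, x] = T(y|x); columns sum to 1
print(kl(p, q) >= kl(T @ p, T @ q))       # True: contraction with beta = 1
```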
3.2 Marginalization and Marginal Grafting
Two related types of KL contraction operators can be constructed based on marginal distributions.
Consider x with distribution p(x) ∈ Ω_d, and a nonempty index subset A ⊆ {1, ···, d}. Let \A
denote {1, ···, d} − A; the marginal distribution, p_A(x_A), of the sub-vector x_A formed by components
whose indices are in A is obtained by integrating p(x) over the sub-vector x_{\A}. This marginalization
operation thus defines a distribution operator between p ∈ Ω_d and p_A ∈ Ω_{|A|}, as:
  Φ^m_A{p}(x_A) = ∫_{R^{d−|A|}} p(x) dx_{\A} = p_A(x_A).  (3)
Another KL contraction operator, termed marginal grafting, can also be defined based on p_A. For
a distribution q(x) ∈ Ω_d, the marginal grafting operator is defined as:
  Φ^{mg}_{p,A}{q}(x) = q(x)·p_A(x_A)/q_A(x_A) = q_{\A|A}(x_{\A}|x_A)·p_A(x_A).  (4)
Φ^{mg}_{p,A}{q} can be understood as replacing q_A in q(x) with p_A. The last term in Eq.(4) is nonnegative
and integrates to one over R^d, and is thus a proper probability distribution in Ω_d. Furthermore, p is
the fixed point of the operator Φ^{mg}_{p,A}, as Φ^{mg}_{p,A}{p} = p.
The following result shows that both Φ^m_A and Φ^{mg}_{p,A} are KL contraction operators, and that they are
in a sense complementary to each other.
Lemma 2 (Huber [13]) For two distributions p, q ∈ Ω_d, with the distribution operators defined in
Eq.(3) and Eq.(4), we have
  KL(p‖q) − KL( Φ^{mg}_{p,A}{p} ‖ Φ^{mg}_{p,A}{q} ) = KL( Φ^m_A{p} ‖ Φ^m_A{q} ).
Furthermore,
  KL( Φ^{mg}_{p,A}{p} ‖ Φ^{mg}_{p,A}{q} ) = ∫_{R^{|A|}} p_A(x_A)·KL( p_{\A|A}(·|x_A) ‖ q_{\A|A}(·|x_A) ) dx_A,
where p_{\A|A}(·|x_A) and q_{\A|A}(·|x_A) are the conditional distributions induced from p(x) and q(x),
and
  KL( Φ^m_A{p} ‖ Φ^m_A{q} ) = KL( p_A(x_A) ‖ q_A(x_A) ).
Lemma 2 also indicates that neither Φ^m_A nor Φ^{mg}_{p,A} is strict: the KL contraction of p(x) and q(x) for
the former is zero if p_{\A|A}(x_{\A}|x_A) = q_{\A|A}(x_{\A}|x_A) (a.e.), even though they may differ on the
marginal distribution over x_A. And for the latter, having p_A(x_A) = q_A(x_A) is sufficient to make
their KL contraction zero.
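The identity in Lemma 2 can likewise be verified on a small discrete example; the sketch below (random joint tables, purely illustrative) checks that the marginal-grafting contraction equals the KL divergence of the A-marginals:

```python
import numpy as np

rng = np.random.default_rng(2)
kl = lambda p, q: float(np.sum(p * np.log(p / q)))

P = rng.dirichlet(np.ones(12)).reshape(4, 3)   # joint over x_A (rows), x_rest (cols)
Q = rng.dirichlet(np.ones(12)).reshape(4, 3)
PA, QA = P.sum(axis=1), Q.sum(axis=1)          # Phi^m_A: marginalization
graft = lambda R: R / R.sum(axis=1, keepdims=True) * PA[:, None]  # Phi^mg_{p,A}
print(kl(P, Q) - kl(graft(P), graft(Q)))       # equals KL(P_A || Q_A) ...
print(kl(PA, QA))                              # ... as Lemma 2 states
```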
² We cite the original reference to this and subsequent results, which are recast using the terminology introduced in this work. Due to the limit of space, we defer formal proofs of these results to the supplementary materials.
3.3 Binary Mixture
For two different distributions p(x) and g(x) ∈ Ω_d, we introduce a binary variable c ∈ {0, 1} with
Pr(c = i) = π_i, where π_0, π_1 ∈ [0, 1] and π_0 + π_1 = 1. We can then form a joint distribution
p̃(x, c = 0) = π_0·g(x) and p̃(x, c = 1) = π_1·p(x) over R^d × {0, 1}. Marginalizing out c from
p̃(x, c), we obtain a binary mixture p̃(x), which induces a distribution operator:
  Φ^b_g{p}(x) = π_0·g(x) + π_1·p(x) = p̃(x).  (5)
The following result shows that Φ^b_g is a strict KL contraction operator with β = 1/π_1.
Lemma 3 For two distributions p, q ∈ Ω_d, with the distribution operator defined in Eq.(5), we have
  KL(p‖q) − (1/π_1)·KL( Φ^b_g{p} ‖ Φ^b_g{q} ) = (1/π_1)·∫_{R^d} p̃(x)·KL( p_{c|x}(c|x) ‖ q_{c|x}(c|x) ) dx ≥ 0,
where p(c|x) and q(c|x) are the induced posterior conditional distributions from p̃(x, c) and q̃(x, c),
respectively. The equality holds if and only if p = q (a.e.).
3.4 Lumping
Let S = (S_1, S_2, ···, S_m) be a partition of R^d such that S_i ∩ S_j = ∅ for i ≠ j, and ∪_{i=1}^m S_i = R^d.
The lumping [8] of a distribution p(x) ∈ Ω_d over S yields a distribution over i ∈ {1, 2, ···, m}, and
subsequently induces a distribution operator Φ^l_S, as:
  Φ^l_S{p}(i) = ∫_{x∈S_i} p(x) dx = P_i^S, for i = 1, ···, m.  (6)
The following result shows that Φ^l_S is a KL contraction operator with β = 1.
Lemma 4 (Csiszár & Shields [8]) For two distributions p, q ∈ Ω_d, with the distribution operator
defined in Eq.(6), we have
  KL(p‖q) − KL( Φ^l_S{p} ‖ Φ^l_S{q} ) = Σ_{i=1}^m P_i^S·KL( p̂_i ‖ q̂_i ) ≥ 0,
where p̂_i(x) = p(x)·1[x∈S_i] / ∫_{x′∈S_i} p(x′)dx′ and q̂_i(x) = q(x)·1[x∈S_i] / ∫_{x′∈S_i} q(x′)dx′ are
the distributions induced from p and q by restricting to S_i, respectively, with 1[·] being the indicator
function.
Note that Φ^l_S is in general not strict, as two distributions that agree over all p̂_i and q̂_i will have a
zero KL contraction.
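Lemma 4 also lends itself to a direct numeric check; in the discrete sketch below (random distributions and an arbitrary partition, purely illustrative), the lumping contraction matches the weighted sum of restricted KL divergences:

```python
import numpy as np

rng = np.random.default_rng(3)
kl = lambda p, q: float(np.sum(p * np.log(p / q)))

p, q = rng.dirichlet(np.ones(6)), rng.dirichlet(np.ones(6))
S = [np.array([0, 1]), np.array([2, 3, 4]), np.array([5])]   # a partition of states
lump = lambda r: np.array([r[s].sum() for s in S])           # Phi^l_S
gap = kl(p, q) - kl(lump(p), lump(q))
decomp = sum(p[s].sum() * kl(p[s] / p[s].sum(), q[s] / q[s].sum()) for s in S)
print(gap, decomp)   # equal, and nonnegative, as Lemma 4 states
```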
4 Minimizing KL Contraction for Parametric Learning
In this work, we take the information geometric view of parameter learning: assuming the training
data are samples from a distribution p ∈ Ω_d, we seek an optimal distribution on the statistical
manifold corresponding to the parametric distribution family q_θ that best approximates p [20]. In this
context, maximum (log) likelihood learning is equivalent to finding the parameter θ that minimizes
the KL divergence of p and q_θ [8], as argmin_θ KL(p‖q_θ) = argmax_θ ∫_{R^d} p(x)·log q_θ(x) dx. The
data-based ML objective is obtained when we approximate the expectation with the sample average,
as ∫_{R^d} p(x)·log q_θ(x) dx ≈ (1/n)·Σ_{k=1}^n log q_θ(x^{(k)}).
The KL contraction operators suggest an alternative approach for parametric learning. In particular,
the KL contraction of p and q_θ under a KL contraction operator is always nonnegative and reaches
zero when p and q_θ are equal almost everywhere. Therefore, we can minimize their KL contraction under a KL contraction operator to encourage the matching of q_θ to p. We term this general
approach of parameter learning minimum KL contraction (MKC). Mathematically, minimum KL
contraction may be realized with three different but related types of objective functions.
- Type I: With a KL contraction operator Φ, we can find the optimal θ that directly minimizes the KL
contraction between p and q_θ, as:
  argmin_θ KL(p‖q_θ) − β·KL( Φ{p} ‖ Φ{q_θ} ).  (7)
In practice, it may be desirable to use Φ with β = 1 that is also a linear operator for L²-bounded
functions over R^d [19]. To better see this, consider q_θ(x) = q̃_θ(x)/Z(θ) as the model defined with
the unnormalized model and its partition function. Furthermore, assuming that we can obtain
samples {x_1, ···, x_n} and {y_1, ···, y_{n′}} from p and Φ{p}, respectively, the optimization of
Eq.(7) can be approximated as
  argmin_θ KL(p‖q_θ) − KL( Φ{p} ‖ Φ{q_θ} ) ≈ argmax_θ (1/n)·Σ_{k=1}^n log q̃_θ(x^{(k)}) − (1/n′)·Σ_{k=1}^{n′} log Φ{q̃_θ}(y^{(k)}),
where, due to the linearity of Φ, the two terms of Z(θ) in q_θ and Φ{q_θ} cancel each other out.
Therefore, the optimization does not require the computation of the partition function, a highly
desirable property for fitting parameters in high dimensional probabilistic models with intractable
partition functions. Type I MKC objective functions with KL contraction operators induced from
conditional distribution, marginalization, marginal grafting, linear transform, and lumping all fall
into this category. However, using nonlinear KL contraction operators, such as the one induced
from binary mixtures, may also be able to avoid computing the partition function (e.g., Section
4.4). Furthermore, the KL contraction operator in Eq.(7) can have parameters, which can include
the model parameter θ (e.g., Section 4.2). However, the optimization becomes more complicated
as Φ{p} cannot be ignored when optimizing θ. Last, note that when using Type I MKC objective
functions with a non-strict KL contraction operator, we cannot guarantee p = q_θ even if their
corresponding KL contraction is zero.
- Type II: Consider a strict KL contraction operator with β = 1, denoted as Φ_t, that is parameterized
by an auxiliary parameter t different from θ, such that for any distribution p ∈ Ω_d we have
Φ_0{p} = p and Φ_t{p} is continuously differentiable with regards to t. Then the KL divergence of
Φ_t{p} and Φ_t{q_θ} can be regarded as a function of t and θ, as L(t, θ) = KL( Φ_t{p} ‖ Φ_t{q_θ} ).
Thus, the KL contraction in Eq.(7) can be approximated with a Taylor expansion near t = 0, as
  KL(p‖q_θ) − KL( Φ_{∆t}{p} ‖ Φ_{∆t}{q_θ} ) = KL( Φ_0{p} ‖ Φ_0{q_θ} ) − KL( Φ_{∆t}{p} ‖ Φ_{∆t}{q_θ} )
  = L(0, θ) − L(∆t, θ) ≈ −∆t·∂L(t, θ)/∂t|_{t=0} = −∆t·(∂/∂t) KL( Φ_t{p} ‖ Φ_t{q_θ} )|_{t=0}.
If the derivative of the KL contraction with regards to t is easier to work with than the KL contraction itself (e.g., Section 4.5), we can fix ∆t and equivalently maximize the derivative, which is the
Type II MKC objective function, as
  argmax_θ (∂/∂t) KL( Φ_t{p} ‖ Φ_t{q_θ} )|_{t=0}.  (8)
- Type III: In the case where we have access to a set of different KL contraction operators,
{Φ_1, ···, Φ_m}, we can implement the minimum KL contraction principle by finding the optimal
θ that minimizes their average KL contraction, as
  argmin_θ (1/m)·Σ_{i=1}^m ( KL(p‖q_θ) − β_i·KL( Φ_i{p} ‖ Φ_i{q_θ} ) ).  (9)
As each KL contraction in the sum is nonnegative, Eq.(9) is zero if and only if each KL contraction is zero. If the consistency of p and q_θ with regards to Φ_i corresponds to certain constraints
on q_θ, the objective function, Eq.(9), represents the consistency of all such constraints. Under
some special cases, minimizing Eq.(9) to zero over a sufficient number of certain types of KL
contraction operators may indeed ensure equality of p and q_θ (e.g., Section 4.6).
4.1 Fitting a Gaussian Model with a KL Contraction Operator from a Gaussian Distribution
We first describe an instance of MKC learning under a very simple setting, where we approximate
a distribution p(x) for x ∈ R with known mean µ_0 and variance σ_0², with a Gaussian model q_θ
whose mean and variance are the parameters to be estimated, θ = (µ, σ²). Using the strict KL
contraction operator Φ^c_T constructed with a Gaussian conditional distribution
  T(y|x) = (1/√(2πσ_1²))·exp( −(y − x)²/(2σ_1²) ),
with known variance σ_1², we form the Type I MKC objective function. In this simple case, Eq.(7)
reduces to a closed-form objective function, as:
  argmin_{µ,σ²}  σ_0²/(2σ²) − (σ_0² + σ_1²)/(2(σ² + σ_1²)) + (1/2)·log( σ²/(σ² + σ_1²) ) + σ_1²(µ − µ_0)²/(2σ²(σ² + σ_1²)),
whose optimal solution, µ = µ_0 and σ² = σ_0², is obtained by direct differentiation. The detailed
derivation of this result is omitted due to the limit of space. Note that the optimal parameters do
not rely on the parameter in the KL contraction operator (in this case, σ_1²), and are the same as those
obtained by minimizing the KL divergence between p and q_θ, or equivalently, maximizing the log
likelihood, when samples from p(x) are used to approximate the expectation.
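A quick numerical check of this closed form (with assumed values for µ_0, σ_0² and σ_1²; the log-parametrization of σ² is only a convenience) confirms that a generic optimizer recovers (µ_0, σ_0²):

```python
import numpy as np
from scipy.optimize import minimize

mu0, s0, s1 = 1.3, 2.0, 0.7          # mu_0, sigma_0^2, sigma_1^2 (assumed values)

def mkc_objective(params):
    mu, s = params[0], np.exp(params[1])      # s = sigma^2 > 0
    return (s0 / (2 * s) - (s0 + s1) / (2 * (s + s1))
            + 0.5 * np.log(s / (s + s1))
            + s1 * (mu - mu0) ** 2 / (2 * s * (s + s1)))

res = minimize(mkc_objective, x0=np.zeros(2))
print(res.x[0], np.exp(res.x[1]))    # approximately (1.3, 2.0)
```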
4.2 Relation with Contrastive Divergence [12]
Next, we consider the general strict KL contraction operator Φ^c_{T_θ} constructed from a conditional
distribution, T_θ(y|x), for x, y ∈ R^d, of which the parametric model q_θ is a fixed point, as
q_θ(y) = ∫_{R^d} T_θ(y|x)·q_θ(x) dx = Φ^c_{T_θ}{q_θ}(y). In other words, q_θ is the equilibrium distribution of the
Markov chain whose transition distribution is given by T_θ(y|x). The Type I objective function
of minimum KL contraction, Eq.(7), for p, q_θ ∈ Ω_d under Φ^c_{T_θ} is
  argmin_θ KL(p‖q_θ) − KL( Φ^c_{T_θ}{p} ‖ Φ^c_{T_θ}{q_θ} ) = argmin_θ KL(p‖q_θ) − KL( p̃_θ ‖ q_θ ),
where p̃_θ is the shorthand notation for Φ^c_{T_θ}{p}. Note that this is the objective function of contrastive divergence learning [12]. However, the dependency of p̃_θ on θ makes this objective function
difficult to optimize. By ignoring this dependency, the practical parameter update in contrastive
divergence only approximately follows the gradient of this objective function [5].
4.3 Relation with Partial Likelihood [7] and Non-local Contrastive Objectives [31]
Next, we consider the Type I MKC objective function, Eq.(7), combined with the KL contraction
operator constructed from lumping. Using Lemma 4, we have
  argmin_θ KL(p‖q_θ) − KL( Φ^l_S{p} ‖ Φ^l_S{q_θ} ) = argmin_θ Σ_{i=1}^m P_i^S·KL( p̂_i ‖ q̂_i^θ )
  = argmax_θ Σ_{i=1}^m P_i^S·∫_{x∈S_i} p̂_i(x)·log q̂_i^θ(x) dx ≈ argmax_θ (1/n)·Σ_{i=1}^m Σ_{k=1}^n 1[x^{(k)}∈S_i]·log q̂_i^θ(x^{(k)}),
where {x^{(1)}, ···, x^{(n)}} are samples from p(x). Minimizing the KL contraction in this case is equivalent to maximizing a weighted sum of the log likelihoods of the probability distributions formed by
restricting the overall model to subsets of the state space. The last step resembles the partial likelihood
objective function [7], which was recently rediscovered in the context of discriminative learning as
non-local contrastive objectives [31]. In [31], the partitions are required to overlap with each other,
while the above result shows that non-overlapping partitions of R^d can also be used for non-ML
parameter learning.
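For a finite state space the objective can be written down directly, and it makes the cancellation of Z(θ) explicit: within each cell S_i, the restricted model q̂_i^θ only needs a sum of unnormalized probabilities over that cell. A sketch (the discrete setting and function names are assumptions for illustration):

```python
import numpy as np

def mkc_lumping_objective(q_tilde, S, X):
    """Sample-based Type I MKC objective under lumping, discrete case.
    q_tilde(theta): unnormalized model probabilities over all states;
    S: list of index arrays partitioning the state space;
    X: array of observed state indices (samples from p)."""
    def objective(theta):
        qt = q_tilde(theta)
        total = 0.0
        for s in S:
            in_cell = X[np.isin(X, s)]
            if len(in_cell):
                # restricted log-model: cell-local normalizer, no global Z(theta)
                total += np.sum(np.log(qt[in_cell] / qt[s].sum()))
        return total / len(X)      # maximize over theta
    return objective
```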
4.4 Relation with Noise Contrastive Estimation [11]

Next, we consider the Type I MKC objective function, Eq.(7), combined with the strict KL contraction operator constructed from the binary mixture operation (Lemma 3). In particular, we simplify Eq.(7) using the definition of Φ^b_g, as:
$$\operatorname*{argmin}_\theta\ \frac{1}{\pi_1}\Big[KL(p\|q_\theta) - KL\big(\Phi^b_g\{p\}\,\big\|\,\Phi^b_g\{q_\theta\}\big)\Big]$$
$$= \operatorname*{argmin}_\theta\ \frac{1}{\pi_1}\int_{\mathbb{R}^d} \big(\pi_0 g(x) + \pi_1 p(x)\big)\log\big(\pi_0 g(x) + \pi_1 q_\theta(x)\big)\,dx \;-\; \int_{\mathbb{R}^d} p(x)\log q_\theta(x)\,dx$$
$$= \operatorname*{argmax}_\theta\ \int_{\mathbb{R}^d} p(x)\log\frac{\pi_1 q_\theta(x)}{\pi_0 g(x)+\pi_1 q_\theta(x)}\,dx \;+\; \frac{\pi_0}{\pi_1}\int_{\mathbb{R}^d} g(x)\log\frac{\pi_0 g(x)}{\pi_0 g(x)+\pi_1 q_\theta(x)}\,dx.$$
When the expectations in the above objective function are approximated with averages over samples from p(x) and g(x), {x^{(1)}, …, x^{(n_+)}} and {y^{(1)}, …, y^{(n_-)}}, the Type I MKC objective function in this case reduces to
$$\operatorname*{argmax}_\theta\ \frac{1}{n_+}\sum_{k=1}^{n_+}\log\frac{\pi_1 q_\theta(x^{(k)})}{\pi_0 g(x^{(k)})+\pi_1 q_\theta(x^{(k)})} \;+\; \frac{\pi_0}{\pi_1}\,\frac{1}{n_-}\sum_{k=1}^{n_-}\log\frac{\pi_0 g(y^{(k)})}{\pi_0 g(y^{(k)})+\pi_1 q_\theta(y^{(k)})}.$$
If we set π₀ = π₁ = 1/2, and treat {x^{(1)}, …, x^{(n_+)}} and {y^{(1)}, …, y^{(n_-)}} as data of interest and noise, respectively, the above objective function can also be interpreted as minimizing the Bayesian classification error of data and noise, which is the objective function of noise-contrastive estimation [11].
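A minimal noise-contrastive estimation sketch with π₀ = π₁ = 1/2, assuming a 1-D unnormalized Gaussian model log q̃_θ(u) = −(u − μ)²/2 + c and Gaussian noise g (all distributions and sample sizes are illustrative). The parameter c absorbs the negative log-normalizer, so it should approach −½ log 2π:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.normal(1.0, 1.0, size=2000)               # "data of interest", p = N(1, 1)
y = rng.normal(0.0, 2.0, size=2000)               # "noise", g = N(0, 4)

def neg_nce(params):                              # pi_0 = pi_1 = 1/2, n_+ = n_-
    mu, c = params                                # unnormalized model: log q̃ = -(u-mu)^2/2 + c
    h = lambda u: (-(u - mu) ** 2 / 2 + c) - norm.logpdf(u, 0.0, 2.0)   # log q̃ - log g
    # -[mean log sigma(h(x)) + mean log sigma(-h(y))], written stably
    return np.logaddexp(0.0, -h(x)).mean() + np.logaddexp(0.0, h(y)).mean()

mu_hat, c_hat = minimize(neg_nce, np.zeros(2)).x
print(mu_hat, c_hat, -0.5 * np.log(2 * np.pi))    # c estimates the log-normalizer
```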
4.5 Relation with Score Matching [14]

Next, we consider the strict KL contraction operator Φ^c_{T_t} constructed from an isotropic Gaussian conditional distribution with a time-decaying variance (i.e., a Gaussian diffusion process):
$$T_t(y|x) = \frac{1}{(2\pi t)^{d/2}}\,\exp\left(-\frac{\|y-x\|^2}{2t}\right),$$
where t ∈ [0, ∞) is the continuous temporal index. Note that we have Φ^c_{T_0}{p} = p for any p ∈ Ω_d. If both p(x) and q_θ(x) are functions differentiable with regards to x, it is known that the temporal derivative of the KL contraction of p and q_θ under Φ^c_{T_t} is in closed form, which is formally stated in the following result.

Lemma 5 (Lyu [25]) For any two distributions p, q ∈ Ω_d differentiable with regards to x, we have
$$\frac{d}{dt}\, KL\big(\Phi^c_{T_t}\{p\}\,\big\|\,\Phi^c_{T_t}\{q_\theta\}\big) = -\frac{1}{2}\int_{\mathbb{R}^d} \Phi^c_{T_t}\{p\}(x)\,\left\|\frac{\nabla_x \Phi^c_{T_t}p(x)}{\Phi^c_{T_t}p(x)} - \frac{\nabla_x \Phi^c_{T_t}q_\theta(x)}{\Phi^c_{T_t}q_\theta(x)}\right\|^2 dx, \qquad (10)$$
where ∇_x is the gradient operator with regards to x.

Setting t = 0 in Eq.(10), we obtain a closed form for the Type II MKC objective function, Eq.(8), which can be further simplified [14], as
$$\operatorname*{argmax}_\theta\ \frac{d}{dt}\, KL\big(\Phi_t\{p\}\,\big\|\,\Phi_t\{q_\theta\}\big)\Big|_{t=0} = \operatorname*{argmin}_\theta\ \int_{\mathbb{R}^d} p(x)\left\|\frac{\nabla_x p(x)}{p(x)} - \frac{\nabla_x q_\theta(x)}{q_\theta(x)}\right\|^2 dx$$
$$= \operatorname*{argmin}_\theta\ \int_{\mathbb{R}^d} p(x)\Big(\|\nabla_x\log q_\theta(x)\|^2 + 2\Delta_x \log q_\theta(x)\Big)\, dx \;\approx\; \operatorname*{argmin}_\theta\ \frac{1}{n}\sum_{k=1}^n \Big(\big\|\nabla_x\log q_\theta(x^{(k)})\big\|^2 + 2\Delta_x\log q_\theta(x^{(k)})\Big),$$
where {x^{(1)}, …, x^{(n)}} are samples from p(x), and Δ_x is the Laplacian operator with regards to x. The last step is the objective function of score matching learning [14].
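A short score-matching sketch for a 1-D Gaussian, where ∇_x log q_θ and Δ_x log q_θ are available in closed form (a toy illustration; the data and parametrization are our own). In this fully tractable case the minimizer coincides with the sample mean and variance, i.e., with the ML solution:

```python
import numpy as np
from scipy.optimize import minimize

x = np.random.default_rng(3).normal(2.0, 1.5, size=5000)

def sm_objective(params):
    mu, s2 = params[0], np.exp(params[1])
    score = -(x - mu) / s2          # ∇_x log q_theta(x) for q_theta = N(mu, s2)
    laplacian = -1.0 / s2           # Δ_x log q_theta(x)
    return np.mean(score ** 2 + 2.0 * laplacian)

mu_hat, log_s2_hat = minimize(sm_objective, np.zeros(2)).x
print(mu_hat, np.exp(log_s2_hat), x.mean(), x.var())   # score matching = moment matching here
```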
4.6 Relation with Conditional Composite Likelihood [24] and Pseudo-Likelihood [3]

Next, we consider the Type I MKC objective function, Eq.(7), combined with the KL contraction operator Φ^m_A constructed from marginalization. According to Lemma 2, we have
$$\operatorname*{argmin}_\theta\ KL(p\|q) - KL\big(\Phi^m_A\{p\}\,\big\|\,\Phi^m_A\{q\}\big) = \operatorname*{argmax}_\theta\ \int_{\mathbb{R}^d} p(x)\log q_{\backslash A|A}(x_{\backslash A}|x_A)\,dx \;\approx\; \operatorname*{argmax}_\theta\ \frac{1}{n}\sum_{k=1}^n \log q_{\backslash A|A}\big(x^{(k)}_{\backslash A}\big|x^{(k)}_A\big),$$
where in the last step, the expectation over p(x) is replaced with an average over samples from p(x), {x^{(1)}, …, x^{(n)}}. This corresponds to the objective function in maximum conditional likelihood [17] or maximum mutual information [2], which are non-ML learning objectives for discriminative learning of high-dimensional probabilistic data models.

However, Lemma 2 also shows that KL(p‖q) − KL(Φ^m_A{p}‖Φ^m_A{q}) = 0 is not sufficient to guarantee p = q_θ. Alternatively, we can use the Type III MKC objective function, Eq.(9), to combine KL contraction operators formed from marginalizations over m different index subsets A₁, …, A_m:
$$\operatorname*{argmin}_\theta\ KL(p\|q) - \frac{1}{m}\sum_{i=1}^m KL\big(\Phi^m_{A_i}\{p\}\,\big\|\,\Phi^m_{A_i}\{q\}\big) \;\approx\; \operatorname*{argmax}_\theta\ \frac{1}{m}\sum_{i=1}^m \frac{1}{n}\sum_{k=1}^n \log q_{A_i|\backslash A_i}\big(x^{(k)}_{A_i}\big|x^{(k)}_{\backslash A_i}\big).$$
This is the objective function in conditional composite likelihood [24, 30, 23, 1] (also rediscovered under the name piecewise learning in [26]).

A special case of conditional composite likelihood is when A_i = \{i}, where the resulting marginalization operator, Φ^m_{\{i}}, is known as the i-th singleton marginalization operator. With the d different singleton marginalization operators, we can rewrite the objective function as
$$KL(p\|q) - \frac{1}{d}\sum_{i=1}^{d} KL\big(\Phi^m_{\backslash\{i\}}\{p\}\,\big\|\,\Phi^m_{\backslash\{i\}}\{q\}\big) = \frac{1}{d}\sum_{i=1}^{d}\int p_{\backslash i}(x_{\backslash i})\, KL\big(p_{i|\backslash i}(x_i|x_{\backslash i})\,\big\|\,q_{i|\backslash i}(x_i|x_{\backslash i})\big)\, dx_{\backslash i}.$$
Note that in this case, the average KL contraction is zero if and only if p(x) and q_θ(x) agree on every singleton conditional distribution, i.e., p_{i|\i}(x_i|x_{\i}) = q_{i|\i}(x_i|x_{\i}) for all i and x. According to Brook's Lemma [4], the latter condition is sufficient for p(x) = q_θ(x) (a.e.). Furthermore, when approximating the expectations with averages over samples from p(x), we have
$$\operatorname*{argmin}_\theta\ KL(p\|q) - \frac{1}{d}\sum_{i=1}^{d} KL\big(\Phi^m_{\backslash\{i\}}\{p\}\,\big\|\,\Phi^m_{\backslash\{i\}}\{q\}\big) \;\approx\; \operatorname*{argmax}_\theta\ \frac{1}{d}\sum_{i=1}^{d}\frac{1}{n}\sum_{k=1}^{n} \log q_{i|\backslash i}\big(x^{(k)}_i\big|x^{(k)}_{\backslash i}\big),$$
which is the objective function in maximum pseudo-likelihood learning [3, 29].
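A toy pseudo-likelihood sketch for a small binary pairwise (Ising) model, assuming logistic conditionals p(x_i | x_{\i}) = σ(2 x_i Σ_j J_ij x_j); the Gibbs sampler, coupling scale, and sample sizes are illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
d, n = 4, 4000
J_true = np.triu(0.4 * rng.normal(size=(d, d)), 1)
J_true = J_true + J_true.T                       # symmetric couplings, zero diagonal

# Gibbs sampling from p(x) ∝ exp(x^T J x / 2) on {-1, +1}^d (burn-in, then thinning)
x, X = np.ones(d), np.empty((n, d))
for t in range(200 + 5 * n):
    i = t % d
    if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * (J_true[i] @ x))):  # p(x_i = 1 | rest)
        x[i] = 1.0
    else:
        x[i] = -1.0
    if t >= 200 and (t - 200) % 5 == 0:
        X[(t - 200) // 5] = x

def neg_pl(theta):                               # pseudo-log-likelihood, averaged over samples
    J = np.zeros((d, d)); J[np.triu_indices(d, 1)] = theta; J = J + J.T
    H = X @ J                                    # H[k, i] = sum_j J_ij x_j^(k)
    return np.logaddexp(0.0, -2.0 * X * H).sum() / n   # -sum_i log p(x_i | x_{\i})

theta_hat = minimize(neg_pl, np.zeros(d * (d - 1) // 2)).x
print(np.round(theta_hat, 2), np.round(J_true[np.triu_indices(d, 1)], 2))  # roughly agree
```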
4.7 Relation with Marginal Composite Likelihood

We now consider combining the Type III MKC objective function, Eq.(9), with the KL contraction operator constructed from the marginal grafting operation. Specifically, with m different KL contraction operators constructed from marginal grafting on index subsets A₁, …, A_m, using Lemma 2, we can expand the corresponding Type III minimum KL contraction objective function as:
$$\operatorname*{argmin}_\theta\ KL(p\|q) - \frac{1}{m}\sum_{i=1}^m KL\big(\Phi^{g}_{p,A_i}\{p\}\,\big\|\,\Phi^{g}_{p,A_i}\{q\}\big) = \operatorname*{argmin}_\theta\ \frac{1}{m}\sum_{i=1}^m KL\big(p_{A_i}(x_{A_i})\,\big\|\,q_{A_i}(x_{A_i})\big)$$
$$= \operatorname*{argmax}_\theta\ \frac{1}{m}\sum_{i=1}^m \int_{\mathbb{R}^d} p_{A_i}(x_{A_i})\log q_{A_i}(x_{A_i})\,dx_{A_i} \;\approx\; \operatorname*{argmax}_\theta\ \frac{1}{n}\sum_{k=1}^n \frac{1}{m}\sum_{i=1}^m \log q_{A_i}\big(x^{(k)}_{A_i}\big).$$
The last step, which maximizes the log-likelihood of a set of marginal distributions on training data, corresponds to the objective function of marginal composite likelihood [30]. With m = 1, the resulting objective is used in maximum marginal likelihood or Type-II likelihood learning [9].
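A minimal marginal composite likelihood sketch with m = 2 and A₁ = {1}, A₂ = {2} for a bivariate Gaussian (the data-generating parameters are illustrative). As the objective only involves the marginals, it recovers the two marginal means and variances but carries no information about the correlation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
X = rng.multivariate_normal([1.0, -2.0], [[1.0, 0.6], [0.6, 2.0]], size=3000)

def neg_marginal_cl(params):        # A_1 = {1}, A_2 = {2}: sum of marginal log-likelihoods
    mu, log_s = params[:2], params[2:]
    return -sum(norm.logpdf(X[:, i], mu[i], np.exp(log_s[i])).mean() for i in range(2))

p = minimize(neg_marginal_cl, np.zeros(4)).x
print(p[:2], np.exp(2 * p[2:]))     # ≈ means [1, -2] and marginal variances [1, 2]
```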
5 Discussions
In this work, based on an information geometric view of parameter learning, we have described
minimum KL contraction as a unifying principle for non-ML parameter learning, showing that the
objective functions of several existing non-ML parameter learning methods can all be understood as
instantiations of this principle with different KL contraction operators.
There are several directions in which we would like to extend the current work. First, the proposed minimum KL contraction framework may be further generalized using the more general f-divergence
[8], of which the KL divergence is a special case. With the more general framework, we hope to
reveal further relations among other types of non-ML learning objectives [16, 25, 28, 27]. Second, in
the current work, we have focused on the idealization of parametric learning as matching probability
distributions. In practice, learning is most often performed on finite data set with an unknown underlying distribution. In such cases, asymptotic properties of the estimation as data volume increases,
such as consistency, become essential. While many non-ML learning methods covered in this work
have been shown to be consistent individually, the unification based on the minimum KL contraction may provide a general condition for such asymptotic properties. Last, understanding different
existing non-ML learning objectives through minimizing KL contraction also provides a principled
approach to devise new non-ML learning methods, by seeking new KL contraction operators, or
new combinations of existing KL contraction operators.
Acknowledgement The author would like to thank Jascha Sohl-Dickstein, Michael DeWeese and
Michael Gutmann for helpful discussions on an early version of this work. This work is supported
by the National Science Foundation under the CAREER Award Grant No. 0953373.
References
[1] A. U. Asuncion, Q. Liu, A. T. Ihler, and P. Smyth. Learning with blocks: Composite likelihood and contrastive divergence. In AISTATS, 2010.
[2] L. Bahl, P. Brown, P. de Souza, and R. Mercer. Maximum mutual information estimation of hidden Markov model parameters for speech recognition. In ICASSP, 1986.
[3] J. Besag. Statistical analysis of non-lattice data. The Statistician, 24:179-195, 1975.
[4] D. Brook. On the distinction between the conditional probability and the joint probability approaches in the specification of nearest-neighbor systems. Biometrika, 51(3/4):481-483, 1964.
[5] M. Á. Carreira-Perpiñán and G. E. Hinton. On contrastive divergence learning. In AISTATS, 2005.
[6] T. Cover and J. Thomas. Elements of Information Theory. Wiley-Interscience, 2nd edition, 2006.
[7] D. R. Cox. Partial likelihood. Biometrika, 62(2):269-276, 1975.
[8] I. Csiszár and P. C. Shields. Information theory and statistics: A tutorial. Foundations and Trends in Communications and Information Theory, 1(4):417-528, 2004.
[9] I. J. Good. The Estimation of Probabilities: An Essay on Modern Bayesian Methods. MIT Press, 1965.
[10] M. Gutmann and J. Hirayama. Bregman divergence as general framework to estimate unnormalized statistical models. In Conference on Uncertainty in Artificial Intelligence (UAI), Barcelona, Spain, 2011.
[11] M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, 2010.
[12] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771-1800, 2002.
[13] P. J. Huber. Projection pursuit. The Annals of Statistics, 13(2):435-475, 1985.
[14] A. Hyvärinen. Estimation of non-normalized statistical models using score matching. Journal of Machine Learning Research, 6:695-709, 2005.
[15] A. Hyvärinen. Connections between score matching, contrastive divergence, and pseudolikelihood for continuous-valued variables. IEEE Transactions on Neural Networks, 18(5):1529-1531, 2007.
[16] A. Hyvärinen. Some extensions of score matching. Computational Statistics & Data Analysis, 51:2499-2512, 2007.
[17] T. Jebara and A. Pentland. Maximum conditional likelihood via bound maximization and the CEM algorithm. In NIPS, 1998.
[18] R. Kindermann and J. L. Snell. Markov Random Fields and Their Applications. American Mathematical Society, 1980.
[19] E. Kreyszig. Introductory Functional Analysis with Applications. Wiley, 1989.
[20] S. L. Lauritzen. Statistical manifolds. In Differential Geometry in Statistical Inference, pages 163-216, 1987.
[21] L. Le Cam. Maximum likelihood: an introduction. ISI Review, 58(2):153-171, 1990.
[22] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F. Huang. A tutorial on energy-based learning. In Predicting Structured Data. MIT Press, 2006.
[23] P. Liang and M. I. Jordan. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators. In International Conference on Machine Learning, 2008.
[24] B. G. Lindsay. Composite likelihood methods. Contemporary Mathematics, 80(1):22-39, 1988.
[25] S. Lyu. Interpretation and generalization of score matching. In UAI, 2009.
[26] A. McCallum, C. Pal, G. Druck, and X. Wang. Multi-conditional learning: Generative/discriminative training for clustering and classification. In Association for the Advancement of Artificial Intelligence (AAAI), 2006.
[27] M. Pihlaja, M. Gutmann, and A. Hyvärinen. A family of computationally efficient and simple estimators for unnormalized statistical models. In UAI, 2010.
[28] J. Sohl-Dickstein, P. Battaglino, and M. DeWeese. Minimum probability flow learning. In ICML, 2011.
[29] D. Strauss and M. Ikeda. Pseudolikelihood estimation for social networks. Journal of the American Statistical Association, 85:204-212, 1990.
[30] C. Varin and P. Vidoni. A note on composite likelihood inference and model selection. Biometrika, 92(3):519-528, 2005.
[31] D. Vickrey, C. Lin, and D. Koller. Non-local contrastive objectives. In ICML, 2010.
| 4456 |@word [vw_text column: vowpal-wabbit bag-of-words term counts for paper 4456; contents omitted]
3,818 | 4,457 | Shaping Level Sets with Submodular Functions
Francis Bach
INRIA - Sierra Project-team
Laboratoire d'Informatique de l'École Normale Supérieure, Paris, France
[email protected]
Abstract
We consider a class of sparsity-inducing regularization terms based on submodular functions. While previous work has focused on non-decreasing functions, we explore symmetric submodular functions and their Lovász extensions. We show that the Lovász extension may be seen as the convex envelope of a function that depends on level sets
greater than a given constant): this leads to a class of convex structured regularization
terms that impose prior knowledge on the level sets, and not only on the supports of the
underlying predictors. We provide unified optimization algorithms, such as proximal
operators, and theoretical guarantees (allowed level sets and recovery conditions). By
selecting specific submodular functions, we give a new interpretation to known norms,
such as the total variation; we also define new norms, in particular ones that are based
on order statistics with application to clustering and outlier detection, and on noisy cuts
in graphs with application to change point detection in the presence of outliers.
1 Introduction
The concept of parsimony is central in many scientific domains. In the context of statistics, signal
processing or machine learning, it may take several forms. Classically, in a variable or feature
selection problem, a sparse solution with many zeros is sought so that the model is either more
interpretable, cheaper to use, or simply matches available prior knowledge (see, e.g., [1, 2, 3] and
references therein). In this paper, we instead consider sparsity-inducing regularization terms that
will lead to solutions with many equal values. A classical example is the total variation in one or
two dimensions, which leads to piecewise constant solutions [4, 5] and can be applied to various
image labelling problems [6, 5], or change point detection tasks [7, 8, 9]. Another example is the
"Oscar" penalty which induces automatic grouping of the features [10]. In this paper, we follow
the approach of [3], who designed sparsity-inducing norms based on non-decreasing submodular
functions, as a convex approximation to imposing a specific prior on the supports of the predictors.
Here, we show that a similar parallel holds for some other class of submodular functions, namely
non-negative set-functions which are equal to zero for the full and the empty set. Our main instances of such functions are symmetric submodular functions.
We make the following contributions:
- We provide in Section 3 explicit links between priors on level sets and certain submodular functions: we show that the Lovász extensions (see, e.g., [11] and a short review in Section 2) associated to these submodular functions are the convex envelopes (i.e., tightest convex lower bounds) of specific functions that depend on all level sets of the underlying vector.
- In Section 4, we reinterpret existing norms such as the total variation and design new norms, based on noisy cuts or order statistics. We propose applications to clustering and outlier detection, as well as to change point detection in the presence of outliers.
- We provide unified algorithms in Section 5, such as proximal operators, which are based on a sequence of submodular function minimizations (SFMs), when such SFMs are efficient, or by adapting the generic slower approach of [3] otherwise.
- We derive unified theoretical guarantees for level set recovery in Section 6, showing that even in the absence of correlation between predictors, level set recovery is not always guaranteed, a situation which is to be contrasted with traditional support recovery situations [1, 3].
Notation. For w ∈ R^p and q ∈ [1, ∞], we denote by ‖w‖_q the ℓ_q-norm of w. Given a subset A of V = {1, …, p}, 1_A ∈ {0, 1}^p is the indicator vector of the subset A. Moreover, given a vector w and a matrix Q, w_A and Q_AA denote the corresponding subvector and submatrix of w and Q. Finally, for w ∈ R^p and A ⊆ V, w(A) = Σ_{k∈A} w_k = w^⊤ 1_A (this defines a modular set-function). In this paper, for a certain vector w ∈ R^p, we call level sets the sets of indices whose components are larger (or smaller) than or equal to a certain constant α, which we denote {w ≥ α} (or {w ≤ α}), while we call constant sets the sets of indices whose components are equal to a constant α, which we denote {w = α}.
In this section, we review relevant results from submodular analysis. For more details, see, e.g., [12],
and, for a review with proofs derived from classical convex analysis, see, e.g., [11].
Definition. Throughout this paper, we consider a submodular function F defined on the power set
2V of V = {1, . . . , p}, i.e., such that ?A, B ? V, F (A) + F (B) > F (A ? B) + F (A ? B). Unless
otherwise stated, we consider functions which are non-negative (i.e., such that F (A) > 0 for all A ?
V ), and that satisfy F (?) = F (V ) = 0. Usual examples are symmetric submodular functions, i.e.,
such that ?A ? V, F (V \A) = F (A), which are known to always have non-negative values. We give
several examples in Section 4; for illustrating the concepts introduced in this section and Section 3,
Pp?1
we will consider the cut in an undirected chain graph, i.e., F (A) = j=1 |(1A )j ? (1A )j+1 |.
Lov?asz extension. Given any set-function RF such that F (V ) = F (?) = 0, one can define its
Lov?asz extension f : Rp ? R, as f (w) = R F ({w > ?})d? (see, e.g., [11] for this particular
formulation). The Lov?asz extension is convex if and only if F is submodular. Moreover, f is
piecewise-linear and for all A ? V , f (1A ) = F (A), that is, it is indeed an extension from 2V
(which can be identified to {0, 1}p through indicator vectors) to Rp . Finally, it is always positively
Pp?1
homogeneous. For the chain graph, we obtain the usual total variation f (w) = j=1 |wj ? wj+1 |.
Base polyhedron. We denote by B(F ) = {s ? Rp , ?A ? V,
P s(A) 6 F (A), s(V ) = F (V )}
the base polyhedron [12], where we use the notation s(A) = k?A sk . One important result in
submodular analysis is that if F is a submodular function, then we have a representation of f as a
maximum of linear functions [12, 11], i.e., for all w ? Rp , f (w) = maxs?B(F ) w? s. Moreover,
instead of solving a linear program with 2p ? 1 contraints, a solution s may be obtained by the
following ?greedy algorithm?: order the components of w in decreasing order wj1 > ? ? ? > wjp ,
and then take for all k ? {1, . . . , p}, sjk = F ({j1 , . . . , jk }) ? F ({j1 , . . . , jk?1 }).
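The following is a minimal sketch of the greedy algorithm (our own illustrative implementation; the set-function interface is an assumption), evaluated here on the chain-graph cut, for which f(w) should equal the one-dimensional total variation:

```python
import numpy as np

def lovasz_greedy(F, w):
    """Evaluate f(w) and a maximizer s of max_{s in B(F)} s^T w by the greedy algorithm.
    F maps a Python set to a float, with F(empty set) = 0."""
    order = np.argsort(-w)                     # w_{j_1} >= ... >= w_{j_p}
    s, prev, A = np.zeros_like(w, dtype=float), 0.0, set()
    for j in order:
        A.add(int(j))
        Fj = F(A)
        s[j] = Fj - prev                       # s_{j_k} = F({j_1..j_k}) - F({j_1..j_{k-1}})
        prev = Fj
    return float(s @ w), s

def chain_cut(A, p=5):                          # F(A) = sum_j |(1_A)_j - (1_A)_{j+1}|
    ind = np.array([float(j in A) for j in range(p)])
    return float(np.abs(np.diff(ind)).sum())

w = np.array([0.3, -1.2, 0.5, 0.5, 2.0])
f_val, s = lovasz_greedy(chain_cut, w)
print(f_val, np.abs(np.diff(w)).sum())          # both equal the 1-D total variation: 4.7
```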
Tight and inseparable sets. The polyhedra U = {w ∈ R^p, f(w) ≤ 1} and B(F) are polar to each other (see, e.g., [13] for definitions and properties of polar sets). Therefore, the facial structure of U may be obtained from the one of B(F). Given s ∈ B(F), a set A ⊆ V is said tight if s(A) = F(A). It is known that the set of tight sets is a distributive lattice, i.e., if A and B are tight, then so are A∪B and A∩B [12, 11]. The faces of B(F) are thus intersections of hyperplanes {s(A) = F(A)} for A belonging to certain distributive lattices (see Prop. 3). A set A is said separable if there exists a non-trivial partition A = B ∪ C such that F(A) = F(B) + F(C). A set is said inseparable if it is not separable. For the cut in an undirected graph, inseparable sets are exactly connected sets.
3 Properties of the Lovász Extension

In this section, we derive properties of the Lovász extension for submodular functions, which go beyond convexity and homogeneity. Throughout this section, we assume that F is a non-negative submodular set-function that is equal to zero at ∅ and V. This immediately implies that f is invariant by addition of any constant vector (that is, f(w + α1_V) = f(w) for all w ∈ R^p and α ∈ R), and that f(1_V) = F(V) = 0. Thus, contrary to the non-decreasing case [3], our regularizers are not norms. However, they are norms on the hyperplane {w^⊤ 1_V = 0} as soon as for A ≠ ∅ and A ≠ V, F(A) > 0 (which we assume for the rest of this paper).

We now show that the Lovász extension is the convex envelope of a certain combinatorial function which does depend on all level sets {w ≥ α} of w ∈ R^p (see proof in [14]):

Proposition 1 (Convex envelope) The Lovász extension f(w) is the convex envelope of the function w ↦ max_{α∈R} F({w ≥ α}) on the set [0, 1]^p + R1_V = {w ∈ R^p, max_{k∈V} w_k − min_{k∈V} w_k ≤ 1}.
[Figure 1: two polyhedral unit balls in the plane {w^⊤ 1_V = 0}, with labeled extreme points (1,0,0)/F({1}), (0,1,0)/F({2}), (0,0,1)/F({3}), (1,1,0)/F({1,2}), (1,0,1)/F({1,3}), (0,1,1)/F({2,3}), and regions delimiting the possible orderings of w₁, w₂, w₃.]

Figure 1: Top: Polyhedral level set of f (projected on the set w^⊤ 1_V = 0), for 2 different submodular symmetric functions of three variables, with different inseparable sets leading to different sets of extreme points; changing values of F may make some of the extreme points disappear. The various extreme points cut the space into polygons where the ordering of the components is fixed. Left: F(A) = 1_{|A|∈{1,2}}, leading to f(w) = max_k w_k − min_k w_k (all possible extreme points); note that the polygon need not be symmetric in general. Right: one-dimensional total variation on three nodes, i.e., F(A) = |1_{1∈A} − 1_{2∈A}| + |1_{2∈A} − 1_{3∈A}|, leading to f(w) = |w₁ − w₂| + |w₂ − w₃|, for which the extreme points corresponding to the separable set {1, 3} and its complement disappear.
Note the difference with the result of [3]: we consider here a different set on which we compute the convex envelope ([0, 1]^p + R1_V instead of [−1, 1]^p), and not a function of the support of w, but of all its level sets.¹ Moreover, the Lovász extension is a convex relaxation of a function of level sets (of the form {w ≥ α}) and not of constant sets (of the form {w = α}). It would have been perhaps more intuitive to consider for example ∫_R F({w = α}) dα, since it does not depend on the ordering of the values that w may take; however, to the best of our knowledge, the latter function does not lead to a convex function amenable to polynomial-time algorithms. This definition through level sets will generate some potentially undesired behavior (such as the well-known staircase effect for the one-dimensional total variation), as we show in Section 6.
The next proposition describes the set of extreme points of the "unit ball" U = {w, f(w) ≤ 1}, giving a first illustration of sparsity-inducing effects (see example in Figure 1, in particular for the one-dimensional total variation).

Proposition 2 (Extreme points) The extreme points of the set U ∩ {w^⊤ 1_V = 0} are the projections of the vectors 1_A / F(A) on the plane {w^⊤ 1_V = 0}, for A such that A is inseparable for F and V\A is inseparable for B ↦ F(A ∪ B) − F(A).
Partially ordered sets and distributive lattices. A subset D of 2^V is a (distributive) lattice if it is invariant by intersection and union. We assume in this paper that all lattices contain the empty set ∅ and the full set V, and we endow the lattice with the inclusion order. Such lattices may be represented as a partially ordered set (poset) Π(D) = {A₁, …, A_m} (with order relationship <), where the sets A_j, j = 1, …, m, form a partition of V (we always assume a topological ordering of the sets, i.e., A_i < A_j ⇒ i > j). As illustrated in Figure 2, we go from D to Π(D) by considering all maximal chains in D and the differences between consecutive sets. We go from Π(D) to D by constructing all ideals of Π(D), i.e., sets J such that if an element of Π(D) is lower than an element of J, then it has to be in J (see [12] for more details, and an example in Figure 2). Distributive lattices and posets are thus in one-to-one correspondence. Throughout this section, we go back and forth between these two representations. The distributive lattice D will correspond to all authorized level sets {w ≥ α} for w in a single face of U, while the elements of the poset Π(D) are the constant sets (over which w is constant), with the order between the subsets giving partial constraints between the values of the corresponding constants.
Faces of U. The faces of U are characterized by lattices D, with their corresponding posets Π(D) = {A₁, …, A_m}. We denote by U°_D (and by U_D its closure) the set of w ∈ R^p such that (a) f(w) ≤ 1, (b) w is piecewise constant with respect to Π(D), with value v_i on A_i, and (c) for all pairs (i, j), A_i < A_j ⇒ v_i > v_j. For certain lattices D, these will be exactly the relative interiors of all faces of U (see proof in [14]):

¹ Note that the support {w = 0} is a constant set which is the intersection of two level sets.

[Figure 2: Hasse diagrams; left: a distributive lattice of 7 subsets of {1, …, 6}; right: the corresponding poset of the four blocks {2}, {1}, {5,6}, {3,4}.]

Figure 2: Left: distributive lattice with 7 elements in 2^{{1,2,3,4,5,6}}, represented with the Hasse diagram corresponding to the inclusion order (for a partial order, a Hasse diagram connects A to B if A is smaller than B and there is no C such that A is smaller than C and C is smaller than B). Right: corresponding poset, with 4 elements that form a partition of {1, 2, 3, 4, 5, 6}, represented with the Hasse diagram corresponding to the order < (a node points to its immediate smaller node according to <). Note that this corresponds to an "allowed" lattice (see Prop. 3) for the one-dimensional total variation.

Proposition 3 (Faces of U) The (non-empty) relative interiors of all faces of U are exactly of the form U°_D, where D is a lattice such that:
(i) the restriction of F to D is modular, i.e., for all A, B ∈ D, F(A) + F(B) = F(A∪B) + F(A∩B),
(ii) for all j ∈ {1, …, m}, the set A_j is inseparable for the function C_j ↦ F(B_{j−1} ∪ C_j) − F(B_{j−1}), where B_{j−1} is the union of all ancestors of A_j in Π(D),
(iii) among all lattices corresponding to the same unordered partition, D is a maximal element of the set of lattices satisfying (i) and (ii).
Among the three conditions, the second one is the easiest to interpret, as it reduces to having constant
sets which are inseparable for certain submodular functions, and for cuts in an undirected graph,
these will exactly be connected sets. Note also that extreme points from Prop. 2 are recovered with
D = {∅, A, V}.

Since we are able to characterize all faces of U (of all dimensions) with non-empty relative interior, we have a partition of the space, and any w ∈ R^p which is not proportional to 1_V will be, up to the strictly positive constant f(w), in exactly one of these relative interiors of faces; we refer to this lattice as the lattice associated to w. Note that from the face w belongs to, we have strong constraints on the constant sets, but we may not be able to determine all level sets of w, because only partial constraints are given by the order on Π(D). For example, in Figure 2 for the one-dimensional total variation, w₂ may be larger or smaller than w₅ = w₆ (and even potentially equal, but with zero probability, see Section 6).
4 Examples of Submodular Functions
In this section, we provide examples of submodular functions and of their Lovász extensions. Some are well-known (such as cut functions and total variations), some are new in the context of supervised learning (regular functions), while some have interesting effects in terms of clustering or outlier detection (cardinality-based functions).
Symmetrization. From any submodular function G, one may define F(A) = G(A) + G(V\A) − G(∅) − G(V), which is symmetric. Potentially interesting examples which are beyond the scope of this paper are mutual information, or functions of eigenvalues of submatrices [3].
Cut functions. Given a set of nonnegative weights d : V × V → R₊, define the cut F(A) = Σ_{k∈A, j∈V\A} d(k, j). The Lovász extension is equal to f(w) = Σ_{k,j∈V} d(k, j)(w_k − w_j)₊ (which shows submodularity because f is convex), and is often referred to as the total variation. If the weight function d is symmetric, then the submodular function is also symmetric. In this case, it can be shown that inseparable sets for functions A ↦ F(A ∪ B) − F(B) are exactly connected sets. Hence, by Props. 3 and 6, constant sets are connected sets, which is the usual justification behind the total variation. Note however that some configurations of connected sets are not allowed due to the other conditions in Prop. 3 (see examples in Section 6). In Figure 5 (right plot), we give an example of the usual chain graph, leading to the one-dimensional total variation [4, 5]. Note that these functions can be extended to cuts in hypergraphs, which may have interesting applications in computer vision [6]. Moreover, directed cuts may be interesting to favor increasing or decreasing jumps along the edges of the graph.
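As an illustrative sketch (with random weights of our own choosing), the snippet below checks the closed form f(w) = Σ_{k,j} d(k, j)(w_k − w_j)₊ against the level-set definition f(w) = ∫ F({w ≥ α}) dα on a grid; the weight matrix is left asymmetric to cover the directed-cut remark above:

```python
import numpy as np

rng = np.random.default_rng(6)
p = 6
D = rng.random((p, p)); np.fill_diagonal(D, 0.0)    # nonnegative (directed) weights d(k, j)
w = rng.normal(size=p)

# closed form: f(w) = sum_{k,j} d(k, j) (w_k - w_j)_+
f_closed = float((D * np.maximum(w[:, None] - w[None, :], 0.0)).sum())

# definition: f(w) = ∫ F({w >= alpha}) d(alpha); F vanishes outside [min w, max w]
def cut_F(ind):                                      # ind = indicator vector of A
    return float((D * np.outer(ind, 1.0 - ind)).sum())

alphas = np.linspace(w.min(), w.max(), 20001)
f_grid = np.mean([cut_F((w >= a).astype(float)) for a in alphas]) * (w.max() - w.min())

print(f_closed, f_grid)                              # agree up to discretization error
```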
[Figure 3: three signal panels (weights vs. node index 5 to 20) and one error panel (estimation error vs. log(σ²)), with legend TV / robust TV.]

Figure 3: Three left plots: Estimation of noisy piecewise constant 1D signal with outliers (indices 5 and 15 in the chain of 20 nodes). Left: original signal. Middle: best estimation with total variation (level sets are not correctly estimated). Right: best estimation with the robust total variation based on noisy cut functions (level sets are correctly estimated, with less bias and with detection of outliers). Right plot: clustering estimation error vs. noise level, in a sequence of 100 variables, with a single jump, where noise of variance one is added, with 5% of outliers (averaged over 20 replications).
Regular functions and robust total variation. By partial minimization, we obtain so-called regular functions [6, 5]. One application is "noisy cut functions": for a given weight function d : W × W → R₊, where each node in W is uniquely associated to a node in V, we consider the submodular function obtained as the minimum cut adapted to A in the augmented graph (see an example in the right plot of Figure 5): F(A) = min_{B⊆W} Σ_{k∈B, j∈W\B} d(k, j) + λ|A Δ B|. This allows for robust versions of cuts, where some gaps may be tolerated; indeed, compared to having directly a small cut for A, B needs to have a small cut and be close to A, thus allowing some elements to be removed or added to A in order to lower the cut. See examples in Figure 3, illustrating the behavior of the type of graph displayed in the bottom-right plot of Figure 5, where the performance of the robust total variation is significantly more stable in presence of outliers.

Cardinality-based functions. For F(A) = h(|A|) where h is such that h(0) = h(p) = 0 and h is concave, we obtain a submodular function, and a Lovász extension that depends on the order statistics of w, i.e., if w_{j₁} ≥ ⋯ ≥ w_{jₚ}, then f(w) = Σ_{k=1}^{p−1} h(k)(w_{jₖ} − w_{jₖ₊₁}). While these examples do not provide significantly different behaviors for the non-decreasing submodular functions explored by [3] (i.e., in terms of support), they lead to interesting behaviors here in terms of level sets, i.e., they will make the components of w cluster together in specific ways. Indeed, as shown in Section 6, allowed constant sets A are such that A is inseparable for the function C ↦ h(|B ∪ C|) − h(|B|) (where B ⊆ V is the set of components with higher values than the ones in A), which imposes that the concave function h is not linear on [|B|, |B|+|A|]. We consider the following examples (a short numerical check follows the list):
1. F(A) = |A| · |V\A|, leading to f(w) = Σ_{i,j=1}^p |w_i − w_j|. This function can thus be also seen as the cut in the fully connected graph. All patterns of level sets are allowed as the function h is strongly concave (see left plot of Figure 4). This function has been extended in [15] by considering situations where each w_j is a vector, instead of a scalar, and replacing the absolute value |w_i − w_j| by any norm ‖w_i − w_j‖, leading to convex formulations for clustering.
2. F(A) = 1 if A ≠ ∅ and A ≠ V, and 0 otherwise, leading to f(w) = max_{i,j} |w_i − w_j|. Two large level sets at the top and bottom; all the rest of the variables are in-between and separated (Figure 4, second plot from the left).
3. F(A) = max{|A|, |V\A|}. This function is piecewise affine, with only one kink, thus only one level set of cardinality greater than one (in the middle) is possible, which is observed in Figure 4 (third plot from the left). This may have applications to multivariate outlier detection by considering extensions similar to [15].
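Below is a short sketch of the order-statistics formula for cardinality-based functions, checked on the first two examples above (an illustration with random data; note that with h(k) = k(p − k), f equals the sum over unordered pairs, i.e., half of the full double sum Σ_{i,j}):

```python
import numpy as np

def cardinality_lovasz(h, w):
    """f(w) = sum_{k=1}^{p-1} h(k) (w_{j_k} - w_{j_{k+1}}) for F(A) = h(|A|)."""
    ws = np.sort(w)[::-1]
    k = np.arange(1, len(w))
    return float(np.sum(h(k) * (ws[:-1] - ws[1:])))

w = np.random.default_rng(7).normal(size=8)
p = len(w)

h1 = lambda k: k * (p - k)                       # F(A) = |A| |V\A|, fully connected graph
h2 = lambda k: np.ones_like(k, dtype=float)      # F(A) = 1 for A not in {empty set, V}

# one term per unordered pair {i, j}, hence the 1/2 on the full double sum
print(cardinality_lovasz(h1, w), 0.5 * np.abs(w[:, None] - w[None, :]).sum())
print(cardinality_lovasz(h2, w), w.max() - w.min())
```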
5 Optimization Algorithms
In this section, we present optimization methods for minimizing convex objective functions regularized by the Lovász extension of a submodular function. These lead to convex optimization problems,
which we tackle using proximal methods (see, e.g., [16, 17] and references therein). We first start
by mentioning that subgradients may easily be derived (but subgradient descent is here rather inefficient as shown in Figure 5). Moreover, note that with the square loss, the regularization paths are
piecewise affine, as a direct consequence of regularizing by a polyhedral function.
[Figure 4: four regularization-path panels, each plotting weights against λ.]

Figure 4: Left: Piecewise linear regularization paths of proximal problems (Eq. (1)) for different functions of cardinality. From left to right: quadratic function (all level sets allowed), second example in Section 4 (two large level sets at the top and bottom), piecewise linear with two pieces (a single large level set in the middle). Right: Same plot for the one-dimensional total variation. Note that in all these particular cases the regularization paths for orthogonal designs are agglomerative (see Section 5), while for general designs, they would still be piecewise affine but not agglomerative.
Subgradient. From f(w) = max_{s∈B(F)} s^⊤ w and the greedy algorithm² presented in Section 2, one can easily get in polynomial time one subgradient as one of the maximizers s. This allows the use of subgradient descent, with slow convergence compared to proximal methods (see Figure 5 and the sketch below).
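A minimal subgradient-descent sketch for the proximal problem of Eq. (1) with the chain-graph cut, using the greedy maximizer as subgradient; the signal, λ, step sizes, and iteration count are illustrative choices, and convergence is slow, as noted above:

```python
import numpy as np

def greedy_subgradient(w, p):
    """Greedy maximizer s of max_{s in B(F)} s^T w for the chain-graph cut; s ∈ ∂f(w)."""
    def F(A):
        ind = np.array([float(j in A) for j in range(p)])
        return float(np.abs(np.diff(ind)).sum())
    order = np.argsort(-w)
    s, prev, A = np.zeros(p), 0.0, set()
    for j in order:
        A.add(int(j)); Fj = F(A); s[j] = Fj - prev; prev = Fj
    return s

rng = np.random.default_rng(8)
p, lam = 20, 0.5
z = np.concatenate([np.zeros(10), np.ones(10)]) + 0.3 * rng.normal(size=p)

w = z.copy()
for t in range(1, 5001):                       # minimize (1/2)||w - z||^2 + lam f(w)
    g = (w - z) + lam * greedy_subgradient(w, p)
    w -= 0.1 / np.sqrt(t) * g                  # 1/sqrt(t) schedule, as in Figure 5
print(np.round(w, 2))                          # roughly two constant blocks
```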
Proximal problems through sequences of submodular function minimizations (SFMs). Given regularized problems of the form min_{w∈R^p} L(w) + λf(w), where L is differentiable with a Lipschitz-continuous gradient, proximal methods have been shown to be particularly efficient first-order methods (see, e.g., [16]). In this paper, we use the method "ISTA" and its accelerated variant "FISTA" [16]. To apply these methods, it suffices to be able to solve efficiently:
$$\min_{w\in\mathbb{R}^p}\ \frac{1}{2}\|w - z\|_2^2 + \lambda f(w), \qquad (1)$$
which we refer to as the proximal problem. It is known that solving the proximal problem is related to submodular function minimization (SFM). More precisely, the minimum of A ↦ λF(A) − z(A) may be obtained by selecting negative components of the solution of a single proximal problem [12, 11]. Alternatively, the solution of the proximal problem may be obtained by a sequence of at most p submodular function minimizations of the form A ↦ λF(A) − z(A), by a decomposition algorithm adapted from [18], and described in [11].
Thus, computing the proximal operator has polynomial complexity since SFM has polynomial complexity. However, it may be too slow for practical purposes, as the best generic algorithm has complexity O(p⁶) [19]³. Nevertheless, this strategy is efficient for families of submodular functions for which dedicated fast algorithms exist:
- Cuts: Minimizing the cut or the partially minimized cut, plus a modular function, may be done with a min-cut/max-flow algorithm [see, e.g., 6, 5]. For proximal methods, we need in fact to solve an instance of a parametric max-flow problem, which may be done using other efficient dedicated algorithms [21, 5] than the decomposition algorithm derived from [18].
- Functions of cardinality: minimizing functions of the form A ↦ λF(A) − z(A) can be done in closed form by sorting the elements of z (see the sketch after this list).
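A sketch of this closed form (an illustration with values of our own choosing): for each cardinality k, the best set A is the top-k entries of z, so one sort and a cumulative sum suffice; the brute-force check over all subsets is only for this small example:

```python
import numpy as np
from itertools import combinations

def min_cardinality_sfm(h, z, lam):
    """argmin over A of lam h(|A|) - z(A) for F(A) = h(|A|), h concave, h(0) = 0."""
    order = np.argsort(-z)
    best_z = np.concatenate([[0.0], np.cumsum(z[order])])   # best z(A) for each |A| = k
    vals = lam * h(np.arange(len(z) + 1)) - best_z
    k = int(np.argmin(vals))
    return set(order[:k].tolist()), float(vals[k])

z = np.array([2.0, -1.0, 0.5, 3.0, -0.2])
p, lam = len(z), 0.4
h = lambda k: k * (p - k)                                   # F(A) = |A| |V\A|
A, val = min_cardinality_sfm(h, z, lam)

brute = min(lam * len(S) * (p - len(S)) - z[list(S)].sum()  # exhaustive check, small p only
            for r in range(p + 1) for S in combinations(range(p), r))
print(A, val, float(brute))                                 # val == brute
```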
Proximal problems through the minimum-norm-point algorithm. In the generic case (i.e., beyond cuts and cardinality-based functions), we can follow [12, 3]: since f(w) is expressed as a maximum of linear functions, the problem reduces to the projection on the polytope B(F), for which we happen to be able to easily maximize linear functions (using the greedy algorithm described in Section 2). This can be tackled efficiently by the minimum-norm-point algorithm [12], which iterates between orthogonal projections on affine subspaces and the greedy algorithm for the submodular function⁴. We compare all optimization methods on synthetic examples in Figure 5.
² The greedy algorithm to find extreme points of the base polyhedron should not be confused with the greedy algorithm (e.g., forward selection) that is common in supervised learning/statistics.
³ Note that even in the case of symmetric submodular functions, where more efficient algorithms in O(p³) for submodular function minimization (SFM) exist [20], the minimization of functions of the form λF(A) − z(A) is provably as hard as general SFM [20].
⁴ Interestingly, when used for submodular function minimization (SFM), the minimum-norm-point algorithm has no complexity bound but is empirically faster than algorithms with such bounds [12].
[Figure 5: left, a convergence plot (f(w) − min f vs. time in seconds) comparing fista-generic, ista-generic, subgradient, fista-card, ista-card, and subgradient-sqrt; right, two graph schematics with node sets W and V.]

Figure 5: Left: Matlab running times of different optimization methods on 20 replications of a least-squares regression problem with p = 1000 for a cardinality-based submodular function (best seen in color). Proximal methods with the generic algorithm (using the minimum-norm-point algorithm) are faster than subgradient descent (with two schedules for the learning rate, 1/t or 1/√t). Using the dedicated algorithm (which is not available in all situations) is significantly faster. Right: Examples of graphs (top: chain graph, bottom: hidden chain graph, with sets W and V and examples of a set A in light red, and B in blue, see text for details).
Proximal path as agglomerative clustering. When λ varies from zero to +∞, the unique optimal solution of Eq. (1) goes from z to a constant vector. We now provide conditions under which the regularization path of the proximal problem may be obtained by agglomerative clustering (see examples in Figure 4):

Proposition 4 (Agglomerative clustering) Assume that for all sets A, B such that B ∩ A = ∅ and A is inseparable for D ↦ F(B ∪ D) − F(B), we have:
$$\forall C \subseteq A,\quad \frac{|C|}{|A|}\big[F(B\cup A) - F(B)\big] \;\le\; F(B\cup C) - F(B). \qquad (2)$$
Then the regularization path for Eq. (1) is agglomerative, that is, if two variables are in the same constant set for a certain λ ∈ R₊, so are they for all larger λ' ≥ λ.

As shown in [14], the assumptions required by Prop. 4 are satisfied by (a) all submodular set-functions that only depend on the cardinality, and (b) the one-dimensional total variation; we thus recover and extend known results from [7, 22, 15].
Adding an ℓ₁-norm. Following [4], we may add the ℓ₁-norm ‖w‖₁ for additional sparsity of w (on top of shaping its level sets). The following proposition extends the result for the one-dimensional total variation [4, 23] to all submodular functions and their Lovász extensions:

Proposition 5 (Proximal problem for ℓ₁-penalized problems) The unique minimizer of ½‖w − z‖₂² + f(w) + λ‖w‖₁ may be obtained by soft-thresholding the minimizers of ½‖w − z‖₂² + f(w). That is, the proximal operator for f + λ‖·‖₁ is equal to the composition of the proximal operator for f and the one for λ‖·‖₁.
6 Sparsity-inducing Properties

Going from the penalization of supports to the penalization of level sets introduces some complexity, and for simplicity, in this section we only consider the analysis in the context of orthogonal design matrices, which is often referred to as the denoising problem, and which in the context of level set estimation already leads to interesting results. That is, we study the unique global minimum ŵ of the proximal problem in Eq. (1), make some assumption regarding z (typically z = w* + noise), and provide guarantees related to the recovery of the level sets of w*. We first start by characterizing the allowed level sets, showing that the partial constraints defined in Section 3 on faces of {f(w) ≤ 1} do not create by chance further groupings of variables (see proof in [14]).

Proposition 6 (Stable constant sets) Assume z ∈ R^p has an absolutely continuous density with respect to the Lebesgue measure. Then, with probability one, the unique minimizer ŵ of Eq. (1) has constant sets that define a partition corresponding to a lattice D defined in Prop. 3.

We now show that under certain conditions the recovered constant sets are the correct ones:
Theorem 1 (Level set recovery) Assume that z = w* + σε, where ε ∈ R^p is a standard Gaussian random vector, and w* is consistent with the lattice D and its associated poset Π(D) = (A₁, …, A_m), with values v*_j on A_j, for j ∈ {1, …, m}. Denote B_j = A₁ ∪ ⋯ ∪ A_j for j ∈ {1, …, m}. Assume that there exist some constants ρ_j > 0 and ν > 0 such that:
$$\forall C_j \subseteq A_j,\ \ F(B_{j-1}\cup C_j) - F(B_{j-1}) - \tfrac{|C_j|}{|A_j|}\big[F(B_{j-1}\cup A_j) - F(B_{j-1})\big] \;\ge\; \rho_j \min\Big\{\tfrac{|C_j|}{|A_j|},\ 1 - \tfrac{|C_j|}{|A_j|}\Big\}, \qquad (3)$$
$$\forall i, j \in \{1,\dots,m\},\quad A_i < A_j \ \Rightarrow\ v_i^* - v_j^* \ge \nu, \qquad (4)$$
$$\forall j \in \{1,\dots,m\},\quad \lambda\,\frac{F(B_j) - F(B_{j-1})}{|A_j|} \le \nu/4. \qquad (5)$$
Then the unique minimizer ŵ of Eq. (1) is associated to the same lattice D as w*, with probability greater than
$$1 \;-\; \sum_{j=1}^m \exp\Big(-\frac{\nu^2 |A_j|}{32\sigma^2}\Big) \;-\; 2\sum_{j=1}^m |A_j|\, \exp\Big(-\frac{\lambda^2 \rho_j^2}{2\sigma^2 |A_j|^2}\Big).$$
We now discuss the three main assumptions of Theorem 1, as well as the probability estimate:
- Eq. (3) is the equivalent of the support recovery condition for the Lasso [1] or its extensions [3]. The main difference is that for support recovery, this assumption is always met for orthogonal designs, while here it is not always met. Interestingly, the validity of level set recovery implies the agglomerativity of proximal paths (Eq. (2) in Prop. 4). Note that if Eq. (3) is satisfied only with ρ_j = 0 (it is then exactly Eq. (2) in Prop. 4), then, even with infinitesimal noise, one can show that in some cases, the wrong level sets may be obtained with non-vanishing probability, while if ρ_j is strictly negative, one can show that in some cases, we never get the correct level sets. Eq. (3) is thus essentially sufficient and necessary.
- Eq. (4) corresponds to having distinct values of w* far enough from each other.
- Eq. (5) is a constraint on λ which controls the bias of the estimator: if it is too large, then there may be a merging of two clusters.
- In the probability estimate, the second term is small if all σ²|A_j|⁻¹ are small enough (i.e., given the noise, there is enough data to correctly estimate the values of the constant sets), and the third term is small if λ is large enough, to avoid that clusters split.

One-dimensional total variation. In this situation, we always get ρ_j ≥ 0, but in some cases, it cannot be improved (i.e., the best possible ρ_j is equal to zero), and as shown in [14], this occurs as soon as there is a "staircase", i.e., a piecewise constant vector, with a sequence of at least two consecutive increases, or two consecutive decreases, showing that in the presence of such staircases, one cannot have consistent support recovery, which is a well-known issue in signal processing (typically, more steps are created). If there is no staircase effect, we have ρ_j = 1 and Eq. (5) becomes λ ≤ (ν/8) min_j |A_j|. If we take λ equal to the limiting value in Eq. (5), then we obtain a probability greater than
$$1 - 4p\,\exp\Big(-\frac{\nu^2 \min_j |A_j|^2}{128\,\sigma^2 \max_j |A_j|^2}\Big).$$
Note that we could also derive general results when an additional ℓ₁-penalty is used, thus extending results from [24]. Finally, similar (more) negative results may be obtained for the two-dimensional total variation [25, 14].
Clustering with F(A) = |A| · |V\A|. In this case, we have ρ_j = |A_j|/2, and Eq. (5) becomes λ ≤ ν/(4p), leading to a probability of correct support estimation greater than 1 − 4p exp(−ν²/(128pσ²)). This indicates that the noise variance σ² should be small compared to 1/p, which is not satisfactory and would be corrected with the weighting schemes proposed in [15].
7 Conclusion
We have presented a family of sparsity-inducing norms dedicated to incorporating prior knowledge
or structural constraints on the level sets of linear predictors. We have provided a set of common algorithms and theoretical results, as well as simulations on synthetic examples illustrating the behavior of these norms. Several avenues are worth investigating: first, we could follow current practice in
sparse methods, e.g., by considering related adapted concave penalties to enhance sparsity-inducing
capabilities, or by extending some of the concepts for norms of matrices, with potential applications
in matrix factorization [26] or multi-task learning [27].
Acknowledgements. This paper was partially supported by the Agence Nationale de la Recherche
(MGA Project), the European Research Council (SIERRA Project) and Digiteo (BIOVIZ project).
References
[1] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541-2563, 2006.
[2] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Adv. NIPS, 2009.
[3] F. Bach. Structured sparsity-inducing norms through submodular functions. In Adv. NIPS, 2010.
[4] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused Lasso. J. Roy. Stat. Soc. B, 67(1):91-108, 2005.
[5] A. Chambolle and J. Darbon. On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision, 84(3):288-307, 2009.
[6] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Trans. PAMI, 23(11):1222-1239, 2001.
[7] Z. Harchaoui and C. Lévy-Leduc. Catching change-points with Lasso. Adv. NIPS, 20, 2008.
[8] J.-P. Vert and K. Bleakley. Fast detection of multiple change-points shared by many signals using group LARS. Adv. NIPS, 23, 2010.
[9] M. Kolar, L. Song, and E. Xing. Sparsistent learning of varying-coefficient models with structural changes. Adv. NIPS, 22, 2009.
[10] H. D. Bondell and B. J. Reich. Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR. Biometrics, 64(1):115-123, 2008.
[11] F. Bach. Convex analysis and optimization with submodular functions: a tutorial. Technical Report 00527714, HAL, 2010.
[12] S. Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
[13] R. T. Rockafellar. Convex Analysis. Princeton University Press, 1997.
[14] F. Bach. Shaping level sets with submodular functions. Technical Report 00542949-v2, HAL, 2011.
[15] T. Hocking, A. Joulin, F. Bach, and J.-P. Vert. Clusterpath: an algorithm for clustering using convex fusion penalties. In Proc. ICML, 2011.
[16] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[17] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Technical Report 00613125, HAL, 2011.
[18] H. Groenevelt. Two algorithms for maximizing a separable concave function over a polymatroid feasible region. European Journal of Operational Research, 54(2):227-236, 1991.
[19] J. B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Mathematical Programming, 118(2):237-251, 2009.
[20] M. Queyranne. Minimizing symmetric submodular functions. Mathematical Programming, 82(1):3-12, 1998.
[21] G. Gallo, M. D. Grigoriadis, and R. E. Tarjan. A fast parametric maximum flow algorithm and applications. SIAM Journal on Computing, 18(1):30-55, 1989.
[22] H. Hoefling. A path algorithm for the fused Lasso signal approximator. Technical Report 0910.0526v1, arXiv, 2009.
[23] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11:19-60, 2010.
[24] A. Rinaldo. Properties and refinements of the fused Lasso. Ann. Stat., 37(5):2922-2952, 2009.
[25] V. Duval, J.-F. Aujol, and Y. Gousseau. The TVL1 model: A geometric point of view. Multiscale Modeling and Simulation, 8(1):154-189, 2009.
[26] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Adv. NIPS 17, 2005.
[27] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008.
| 4457 |@word [vw_text column: vowpal-wabbit bag-of-words term counts for paper 4457; contents omitted]
supervised:3 follow:3 improved:1 formulation:2 done:3 strongly:2 chambolle:1 hoefling:1 p6:1 correlation:1 replacing:1 multiscale:1 defines:1 aj:16 perhaps:1 scientific:1 hal:3 effect:4 validity:1 concept:3 staircase:4 contain:1 evolution:1 regularization:8 hence:1 hasse:3 symmetric:10 satisfactory:1 illustrated:1 undesired:1 uniquely:1 dedicated:4 image:1 regularizing:1 boykov:1 common:2 polymatroid:1 empirically:1 extend:1 interpretation:1 hypergraphs:1 interpret:1 refer:2 composition:1 imposing:1 ai:4 smoothness:1 automatic:1 erieure:1 pm:2 consistency:1 inclusion:2 submodular:46 groenevelt:1 stable:2 reich:1 surface:1 base:3 add:1 multivariate:1 agence:1 belongs:1 certain:9 gallo:1 kwk1:2 seen:3 minimum:8 greater:4 additional:1 impose:1 determine:1 maximize:1 ud:3 signal:6 ii:2 full:2 harchaoui:1 multiple:1 reduces:2 technical:4 match:1 characterized:1 faster:4 bach:8 ravikumar:1 a1:4 variant:1 regression:2 vision:2 essentially:1 arxiv:1 addition:1 laboratoire:1 diagram:3 envelope:6 rest:2 w2:11 minb:1 asz:15 kwi:1 fujishige:1 undirected:3 contrary:1 flow:4 call:2 r1v:2 structural:2 presence:4 ideal:1 iii:1 enough:4 split:1 w3:9 identified:1 lasso:6 regarding:1 avenue:1 penalty:5 song:1 queyranne:1 matlab:1 induces:1 zabih:1 generate:1 exist:2 tutorial:1 estimated:2 correctly:3 tibshirani:1 darbon:1 blue:1 group:1 nevertheless:1 changing:1 v1:1 imaging:1 graph:14 relaxation:1 subgradient:7 inverse:1 oscar:2 extends:1 throughout:3 family:2 p3:1 sfm:5 submatrix:1 bound:3 guaranteed:1 tackled:1 correspondence:1 duval:1 topological:1 quadratic:1 nonnegative:1 adapted:2 constraint:6 precisely:1 grigoriadis:1 min:5 subgradients:1 separable:4 hocking:1 structured:2 tv:3 according:1 ball:1 belonging:1 smaller:6 describes:1 wi:3 outlier:10 invariant:2 bondell:1 discus:1 available:2 tightest:1 apply:1 v2:1 generic:6 slower:1 rp:17 original:1 top:5 clustering:11 running:1 kwkq:1 giving:2 k1:2 disappear:2 classical:2 objective:1 added:2 already:1 occurs:1 strategy:1 parametric:3 usual:4 traditional:1 said:3 gradient:1 subspace:1 link:1 card:2 distributive:7 polytope:1 agglomerative:6 trivial:1 w6:1 index:4 relationship:1 illustration:1 minimizing:4 kolar:1 potentially:3 negative:7 stated:1 mink:2 design:5 contraints:1 allowing:1 descent:3 displayed:1 immediate:1 situation:5 maxk:2 extended:2 team:1 tarjan:1 introduced:1 complement:1 namely:1 paris:1 subvector:1 pair:1 required:1 nip:6 trans:1 beyond:3 able:4 pattern:1 sparsity:11 program:1 rf:1 max:7 wainwright:1 power:1 regularized:2 indicator:2 zhu:1 scheme:1 created:1 wj1:2 text:1 prior:5 review:4 qaa:1 acknowledgement:1 geometric:1 relative:4 fully:1 loss:1 interesting:6 proportional:1 srebro:1 approximator:1 penalization:2 affine:4 sufficient:1 consistent:2 imposes:1 thresholding:2 penalized:1 supported:1 soon:2 tvl1:1 bias:2 face:11 characterizing:1 absolute:1 sparse:3 dimension:2 lipschitzcontinuous:1 forward:1 jump:2 projected:1 refinement:1 far:1 approximate:1 sz:1 global:1 investigating:1 mairal:2 clusterpath:1 alternatively:1 continuous:1 iterative:1 sk:1 robust:6 operational:1 european:2 constructing:1 domain:1 vj:3 joulin:1 main:3 wjp:2 noise:6 allowed:7 ista:3 positively:1 augmented:1 referred:2 en:1 slow:2 explicit:1 third:2 weighting:1 theorem:2 specific:4 showing:3 maxi:1 explored:1 grouping:2 exists:2 maximizers:1 incorporating:1 fusion:1 adding:1 merging:1 labelling:1 margin:1 nationale:1 gap:1 authorized:1 sorting:1 intersection:3 simply:1 explore:1 rinaldo:1 expressed:1 ordered:2 partially:4 scalar:1 corresponds:2 minimizer:3 
chance:1 prop:9 obozinski:1 ann:1 shared:1 absence:1 feasible:1 change:6 fista:3 hard:1 contrasted:1 corrected:1 hyperplane:1 denoising:1 total:23 called:1 la:1 highdimensional:1 support:11 latter:1 accelerated:1 absolutely:1 princeton:1 argyriou:1 |
3,819 | 4,458 | Simultaneous Sampling and Multi-Structure Fitting
with Adaptive Reversible Jump MCMC
Trung Thanh Pham, Tat-Jun Chin, Jin Yu and David Suter
School of Computer Science, The University of Adelaide, South Australia
{trung,tjchin,jin.yu,dsuter}@cs.adelaide.edu.au
Abstract
Multi-structure model fitting has traditionally taken a two-stage approach: First,
sample a (large) number of model hypotheses, then select the subset of hypotheses
that optimise a joint fitting and model selection criterion. This disjoint two-stage
approach is arguably suboptimal and inefficient: if the random sampling did not
retrieve a good set of hypotheses, the optimised outcome will not represent a good
fit. To overcome this weakness we propose a new multi-structure fitting approach
based on Reversible Jump MCMC. Instrumental in raising the effectiveness of our
method is an adaptive hypothesis generator, whose proposal distribution is learned
incrementally and online. We prove that this adaptive proposal satisfies the diminishing adaptation property crucial for ensuring ergodicity in MCMC. Our method
effectively conducts hypothesis sampling and optimisation simultaneously, and
yields superior computational efficiency over previous two-stage methods.
1 Introduction
Multi-structure model fitting is concerned with estimating the multiple instances (or structures) of
a geometric model embedded in the input data. The task manifests in applications such as mixture
regression [21], motion segmentation [27, 10], and multi-projective estimation [29]. Such a problem is known for its "chicken-and-egg" nature: Both data-to-structure assignments and structure
parameters are unavailable, but given the solution of one subproblem, the solution of the other can
be easily derived. In practical settings the number of structures is usually unknown beforehand, thus
model selection is required in conjunction with fitting. This makes the problem very challenging.
A common framework is to optimise a robust goodness-of-fit function jointly with a model selection
criterion. For tractability most methods [25, 19, 17, 26, 18, 7, 31] take a "hypothesise-then-select"
approach: First, randomly sample from the parameter space a large number of putative model hypotheses, then select a subset of the hypotheses (structures) that optimise the combined objective
function. The hypotheses are typically fitted on minimal subsets [9] of the input data. Depending on
the specific definition of the cost functions, a myriad of strategies have been proposed to select the
best structures, namely tabu search [25], branch-and-bound [26], linear programming [19], Dirichlet
mixture clustering [17], message passing [18], graph cut [7], and quadratic programming [31].
While sampling is crucial for tractability, a disjoint two-stage approach raises an awkward situation: If the sampled hypotheses are inaccurate, or worse, if not all valid structures are sampled, the
selection or optimisation step will be affected. The concern is palpable especially for higher-order
geometric models (e.g., fundamental matrices in motion segmentation [27]) where enormous sampling effort is required before hitting good hypotheses (those fitted on all-inlier minimal subsets).
Thus two-stage approaches are highly vulnerable to sampling inadequacies, even with theoretical
assurances on the optimisation step (e.g., globally optimal over the sampled hypotheses [19, 7, 31]).
The issue above can be viewed as the lack of a stopping criterion for the sampling stage. If there
is only one structure, we can easily evaluate the sample quality (e.g., consensus size) on-the-fly
and stop as soon as the prospect of obtaining a better sample becomes insignificant [9]. Under
multi-structure data, it is unknown what a suitable stopping criterion is (apart from solving the
overall fitting and model selection problem itself). One can consider iterative local refinement of the
structures or re-sampling after data assignment [7], but the fact remains that if the initial hypotheses
are inaccurate, the results of the subsequent fitting and refinement will be affected.
Clearly, an approach that simultaneously samples and optimises is more appropriate. To this end
we propose a new method for multi-structure fitting and model selection based on Reversible Jump
Markov Chain Monte Carlo (RJMCMC) [12]. By design MCMC techniques directly optimise via
sampling. Despite their popular use [3], such techniques have not been fully explored in multi-structure
fitting (a few authors have applied Monte Carlo techniques for robust estimation [28, 8], but mostly
to enhance hypothesis sampling on single-structure data). We show how to exploit the reversible
jump mechanism to provide a simple and effective framework for multi-structure model selection.
The bane of MCMC, however, is the difficulty in designing efficient proposal distributions. Adaptive
MCMC techniques [4, 24] promise to alleviate this difficulty by learning the proposal distribution
on-the-fly. Instrumental in raising the efficiency of our RJMCMC approach is a recently proposed
hypothesis generator [6] that progressively updates the proposal distribution using generated hypotheses. Care must be taken in introducing such adaptive schemes, since a chain propagated based
on a non-stationary proposal is non-Markovian, and unless the proposal satisfies certain properties [4, 24], this generally means a loss of asymptotic convergence to the target distribution.
Clearing these technical hurdles is one of our major contributions: Using emerging theory from
adaptive MCMC [23, 4, 24, 11], we prove that the adaptive proposal, despite its origins in robust estimation [6], satisfies the properties required for convergence, most notably diminishing adaptation.
The rest of the paper is organised as follows: Sec. 2 formulates our goal within a clear optimisation
framework, and outlines our RJMCMC approach. Sec. 3 describes the adaptive hypothesis proposal
used in our method, and develops proof that it is a valid adaptive MCMC sampler. We present our
experimental results in Sec. 4 and draw conclusions in Sec. 5.
2 Multi-Structure Fitting and Model Selection
Given input data X = {x_i}_{i=1}^N, usually with outliers, our goal is to recover the instances or structures θ_k = {θ_c}_{c=1}^k of a geometric model M embedded in X. The number of valid structures k is
unknown beforehand and must also be estimated from the data. The problem domain is therefore the joint space of structure quantity and parameters {k, θ_k}. Such a problem is typically solved by jointly minimising fitting error and model complexity. Similar to [25, 19, 26], we use the AIC [1]
{k*, θ*_{k*}} = arg min_{{k, θ_k}} −2 log L(θ_k) + 2αn(θ_k).
Here L(θ_k) is the robust data likelihood and n(θ_k) the number of parameters to define θ_k. We include a positive constant α to allow reweighting of the two components. Assuming i.i.d. Gaussian noise with known variance σ, the above problem is equivalent to minimising the function
f(k, θ_k) = Σ_{i=1}^N ρ( min_c r_i^c / (1.96σ) ) + αn(θ_k),        (1)
where r_i^c = g(x_i, θ_c) is the absolute residual of x_i to the c-th structure θ_c in θ_k. The residuals are subjected to a robust loss function ρ(·) to limit the influence of outliers; we use the biweight function [16]. Minimising a function like (1) over a vast domain {k, θ_k} is a formidable task.
2.1 A reversible jump simulated annealing approach
Simulated annealing has proven to be effective for difficult model selection problems [2, 5]. The
idea is to propagate a Markov chain for the Boltzmann distribution encapsulating (1)
b_T(k, θ_k) ∝ exp( −f(k, θ_k) / T )        (2)
where temperature T is progressively lowered until the samples from b_T(k, θ_k) converge to the global minima of f(k, θ_k). Algorithm 1 shows the main body of the algorithm. Under weak regularity assumptions, there exist cooling schedules [5] that will guarantee that as T tends to zero the
samples from the chain will concentrate around the global minima.
To simulate bT (k, ?k ) we adopt a mixture of kernels MCMC approach [2]. This involves in each
iteration the execution of a randomly chosen type of move to update {k, ?k }. Algorithm 2 summarises the idea. We make available 3 types of moves: birth, death and local update. Birth and
death moves change the number of structures k. These moves effectively cause the chain to jump
across parameter spaces ?k of different dimensions. It is crucial that these trans-dimensional jumps
are reversible to produce correct limiting behaviour of the chain. The following subsections explain.
Algorithm 1 Simulated annealing for multi-structure fitting and model selection
1: Initialise temperature T and state {k, θ_k}.
2: Simulate Markov chain for b_T(k, θ_k) until convergence.
3: Lower temperature T and repeat from Step 2 until T → 0.
Algorithm 2 Reversible jump mixture of kernels MCMC to simulate b_T(k, θ_k)
Require: Last visited state {k, θ_k} of previous chain, probability β (Sec. 4 describes setting β).
1: Sample a ∼ U[0, 1].
2: if a ≤ β then
3:    With probability r_B(k), attempt birth move, else attempt death move.
4: else
5:    Attempt local update.
6: end if
7: Repeat from Step 1 until convergence (e.g., last V moves all rejected).
2.1.1 Birth and death moves
The birth move propagates {k, θ_k} to {k', θ'_{k'}}, with k' = k + 1. Applying Green's [12, 22] seminal theorems on RJMCMC, the move is reversible if it is accepted with probability min{1, A}, where

A = [ b_T(k', θ'_{k'}) (1 − r_B(k')) / k' ] / [ b_T(k, θ_k) r_B(k) q(u) ] · | ∂θ'_{k'} / ∂(θ_k, u) |.        (3)
The probability of proposing the birth move is r_B(k), where r_B(k) = 1 for k = 1, r_B(k) = 0.5 for k = 2, . . . , k_max − 1, and r_B(k_max) = 0. In other words, any move that attempts to move k beyond the range [1, k_max] is disallowed in Step 3 of Algorithm 2. The death move is proposed with probability 1 − r_B(k). An existing structure is chosen randomly and deleted from θ_k. The death move is accepted with probability min{1, A^{−1}}, with obvious changes to the notations in A^{−1}.
In the birth move, the extra degrees of freedom required to specify the new item in θ'_{k'} are given by auxiliary variables u, which are in turn proposed by q(u). Following [18, 7, 31], we estimate parameters of the new item by fitting the geometric model M onto a minimal subset of the data. Thus u is a minimal subset of X. The size p of u is the minimum number of data required to instantiate M, e.g., p = 4 for planar homographies, and p = 7 or 8 for fundamental matrices [15]. Our approach is equivalently minimising (1) over collections {k, θ_k} of minimal subsets of X, where now θ_k ≡ {u_c}_{c=1}^k. Taking this view, the Jacobian ∂θ'_{k'}/∂(θ_k, u) is simply the identity matrix.
Considering only minimal subsets somewhat simplifies the problem, but there are still a colossal
number of possible minimal subsets. Obtaining good overall performance thus hinges on the ability
of proposal q(u) to propose minimal subsets that are relevant, i.e., those fitted purely on inliers of
valid structures in the data. One way is to learn q(u) incrementally using generated hypotheses. We
describe such a scheme [6] in Sec. 3 and prove that the adaptive proposal preserves ergodicity.
2.1.2 Local update
A local update does not change the model complexity k. The move involves randomly choosing a structure θ_c in θ_k to update, making only local adjustments to its minimal subset u_c. The outcome is a revised minimal subset u'_c, and the move is accepted with probability min{1, A}, where

A = [ b_T(k, θ'_k) q(u_c | θ'_c) ] / [ b_T(k, θ_k) q(u'_c | θ_c) ].        (4)
As shown in the above, our local update is also accomplished with the adaptive proposal q(u|θ), but this time conditioned on the selected structure θ_c. Sec. 3 describes and analyses q(u|θ).
3 Adaptive MCMC for Multi-Structure Fitting
Our work capitalises on the hypothesis generation scheme of Chin et al. called Multi-GS [6] originally proposed for robust geometric fitting. The algorithm maintains a series of sampling weights
which are revised incrementally as new hypotheses are generated. This bears similarity to the pioneering Adaptive Metropolis (AM) method of Haario et al. [13]. Here, we prove that our adaptive
proposals q(u) and q(u|θ) based on Multi-GS satisfy conditions required to preserve ergodicity.
3.1 The Multi-GS algorithm
Let {θ_m}_{m=1}^M aggregate the set of hypotheses fitted on the minimal subsets proposed thus far in all birth and local update moves in Algorithm 1. To build the sampling weights, first for each x_i ∈ X we compute its absolute residuals as measured to the M hypotheses, yielding the residual vector

r^(i) := [ r_1^(i)  r_2^(i)  · · ·  r_M^(i) ].

We then find the permutation

a^(i) := [ a_1^(i)  a_2^(i)  · · ·  a_M^(i) ]
that sorts the elements in r^(i) in non-descending order. The permutation a^(i) essentially ranks the M hypotheses according to the preference of x_i; the higher a hypothesis is ranked, the more likely x_i is an inlier to it. The weight w_{i,j} between the pair x_i and x_j is obtained as
w_{i,j} = I_h(x_i, x_j) := (1/h) |a_{1:h}^(i) ∩ a_{1:h}^(j)|,        (5)

where |a_{1:h}^(i) ∩ a_{1:h}^(j)| is the number of identical elements shared by the first h elements of a^(i) and a^(j).
Clearly w_{i,j} is symmetric with respect to the input pair x_i and x_j, and w_{i,i} = 1 for all i. To ensure technical consistency in our later proofs, we add a small positive offset ε to the weight¹, or

w_{i,j} = max(I_h(x_i, x_j), ε),        (6)
hence ε ≤ w_{i,j} ≤ 1. The weight w_{i,j} measures the correlation of the top-h preferences of x_i and x_j, and this value is typically high iff x_i and x_j are inliers from the same structure; Figs. 1(c)–(g) illustrate. Parameter h controls the discriminative power of w_{i,j}, and is typically set as a fixed ratio k of M, i.e., h = ⌈kM⌉. Experiments suggest that k = 0.1 provides generally good performance [6].
Multi-GS exploits the preference correlations to sample the next minimal subset u = {x_{s_t}}_{t=1}^p, where x_{s_t} ∈ X and s_t ∈ {1, . . . , N} indexes the particular datum from X; henceforth we regard u ≡ {s_t}_{t=1}^p. The first datum s_1 is chosen purely randomly. Beginning from t = 2, the selection of the t-th member s_t considers the weights related to the data s_1, . . . , s_{t−1} already present in u. More specifically, the index s_t is sampled according to the probabilities
P_t(i) ∝ ∏_{z=1}^{t−1} w_{s_z, i},   for i = 1, . . . , N,        (7)
i.e., if P_t(i) > P_t(j) then i is more likely than j to be chosen as s_t. A new hypothesis θ_{M+1} is then fitted on u and the weights are updated in consideration of θ_{M+1}. Experiments comparing sampling efficiency (e.g., all-inlier minimal subsets produced per unit time) show that Multi-GS is superior over previous guided sampling schemes, especially on multi-structure data; see [6] for details.
3.2 Is Multi-GS a valid adaptive MCMC proposal?
Our RJMCMC scheme in Algorithm 2 depends on the Multi-GS-inspired adaptive proposals q_M(u) and q_M(u|θ), where we now add the subscript M to make explicit their dependency on the set of aggregated hypotheses {θ_m}_{m=1}^M as well as the weights {w_{i,j}}_{i,j=1}^N they induce. The probability of proposing a minimal subset u = {s_t}_{t=1}^p from q_M(u) can be calculated as
"p?1
#?1
d
Y
K
1 Y
T
wsa ,sb
1
wse
,
(8)
qM (u) =
N
e=1
a<b
b?p
d=1
¹ It can be shown that if both x_i and x_j are uniformly distributed outliers, the expected value of w_{i,j} is h/M, i.e., a given pair x_i and x_j will likely have non-zero preference correlation.
where w_i is the column vector [ w_{i,1} . . . w_{i,N} ]^T and ⊙ is the sequential Hadamard product over the given multiplicands. The term with the inverse in Eq. (8) relates to the normalising constants for
Eq. (7). As an example, the probability of selecting the minimal subset u = {s_1, s_2, s_3, s_4} is

q_M(u) = (1/N) · [ w_{s_1,s_2} w_{s_1,s_3} w_{s_2,s_3} w_{s_1,s_4} w_{s_2,s_4} w_{s_3,s_4} ] / [ 1^T w_{s_1} · 1^T (w_{s_1} ⊙ w_{s_2}) · 1^T (w_{s_1} ⊙ w_{s_2} ⊙ w_{s_3}) ].
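A direct sketch of evaluating Eq. (8) for a given minimal subset; it reproduces the four-point example above when p = 4.

```python
import numpy as np

def q_M(W, subset):
    """Proposal probability q_M(u) of Eq. (8) for u = subset = [s_1, ..., s_p]."""
    N = W.shape[0]
    p = len(subset)
    # Numerator: product of pairwise weights over a < b <= p.
    num = 1.0
    for a in range(p):
        for b in range(a + 1, p):
            num *= W[subset[a], subset[b]]
    # Denominator: normalising constants 1^T (w_{s_1} o ... o w_{s_d}), d = 1..p-1.
    denom = 1.0
    running = np.ones(N)
    for d in range(p - 1):
        running = running * W[subset[d]]   # sequential Hadamard product
        denom *= running.sum()
    return num / (N * denom)
```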
The local update proposal q_M(u|θ) differs only in the manner in which the first datum x_{s_1} is selected. Instead of being chosen purely randomly, the first index s_1 is sampled according to
P_{s_1}(i) ∝ exp( −O(g(x_i, θ)) / n ),   for i = 1, . . . , N,        (9)

where O(g(x_i, θ)) is the order statistic of the absolute residual g(x_i, θ) as measured to θ; to define q_M(u|θ) the 1/N term in Eq. (8) is simply replaced with the appropriate probability from Eq. (9).
For local updates an index i is more likely to be chosen as s_1 if x_i is close to θ. Parameter n relates to our prior belief of the minimum number of inliers per structure; we fix this to n = 0.1N.
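A sketch of the conditioned first-datum choice of Eq. (9); the remaining members of u are then drawn exactly as in q_M(u).

```python
import numpy as np

def sample_first_datum(residuals_to_theta, n, rng=np.random.default_rng()):
    """Draw s_1 for q_M(u | theta) using Eq. (9).

    residuals_to_theta: length-N array of |g(x_i, theta)|.
    n:                  prior minimum inlier count, e.g. n = 0.1 * N.
    """
    r = np.asarray(residuals_to_theta, dtype=float)
    # O(g(x_i, theta)): rank of each datum's residual (smallest residual -> rank 1).
    order_stats = np.empty_like(r)
    order_stats[np.argsort(r)] = np.arange(1, len(r) + 1)
    probs = np.exp(-order_stats / n)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```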
Since our proposal distributions are updated with the arrival of new hypotheses, the corresponding
transition probabilities are inhomogeneous (they change with time) and the chain non-Markovian
(the transition to a future state depends on all previous states). We aim to show that such continual adaptations with Multi-GS will still lead to the correct target distribution (2). First we restate
Theorem 1 in [11], which is distilled from other work on adaptive MCMC [23, 4, 24].
Theorem 1. Let Z = {Z_n : n > 0} be a stochastic process on a compact state space Ω evolving according to a collection of transition kernels

T_n(z, z') = pr(Z_{n+1} | Z_n = z, Z_{n−1} = z_{n−1}, . . . , Z_0 = z_0),

and let p(z) be the distribution of Z_n. Suppose for every n and z_0, . . . , z_{n−1} ∈ Ω and for some distribution π(z) on Ω,

Σ_{z_n} π(z_n) T_n(z_n, z_{n+1}) = π(z_{n+1}),        (10)

|T_{n+k}(z, z') − T_n(z, z')| ≤ a_n c_k,   a_n = O(n^{−r_1}),   c_k = O(k^{−r_2}),   r_1, r_2 > 0,        (11)

T_n(z, z') ≥ ε π(z'),   ε > 0,        (12)

where ε does not depend on n, z_0, . . . , z_{n−1}. Then, for any initial distribution p(z_0) for Z_0,

sup_{z_n} |p(z_n) − π(z_n)| → 0   for n → ∞.
Diminishing adaptation. Eq. (11) dictates that the transition kernel, and thus the proposal distribution in the Metropolis-Hastings updates in Eqs. (3) and (4), must converge to a fixed distribution,
i.e., the adaptation must diminish. To see that this occurs naturally in q_M(u), first we show that w_{i,j} for all i, j converges as M increases. Without loss of generality assume that b new hypotheses are generated between successive weight updates w_{i,j} and w'_{i,j}. Then,

lim_{M→∞} |w'_{i,j} − w_{i,j}|
  = lim_{M→∞} | |a'^(i)_{1:k(M+b)} ∩ a'^(j)_{1:k(M+b)}| / (k(M+b)) − |a^(i)_{1:kM} ∩ a^(j)_{1:kM}| / (kM) |
  ≤ lim_{M→∞} ( |a^(i)_{1:kM} ∩ a^(j)_{1:kM}| + b(k+1) ) / (k(M+b)) − |a^(i)_{1:kM} ∩ a^(j)_{1:kM}| / (kM)
  = lim_{M→∞} ( |a^(i)_{1:kM} ∩ a^(j)_{1:kM}|/M + b(k+1)/M ) / (k + kb/M) − ( |a^(i)_{1:kM} ∩ a^(j)_{1:kM}|/M ) / k
  = 0,

where a'^(i) is the revised preference of x_i in consideration of the b new hypotheses. The result is
based on the fact that the extension of b hypotheses will only perturb the overlap between the top-k
percentile of any two preference vectors by at most b(k + 1) items. It should also be noted that the
result is not due to w'_{i,j} and w_{i,j} simultaneously vanishing with increasing M; in general

lim_{M→∞} |a^(i)_{1:kM} ∩ a^(j)_{1:kM}| / M ≠ 0

since a^(i) and a^(j) are extended and revised as M increases and this may increase their mutual
overlap. Figs. 1(c)–(g) illustrate the convergence of w_{i,j} as M increases. Using the above result, it can be shown that the product of any two weights also converges:

lim_{M→∞} |w'_{i,j} w'_{p,q} − w_{i,j} w_{p,q}| = lim_{M→∞} |w'_{i,j} (w'_{p,q} − w_{p,q}) + w_{p,q} (w'_{i,j} − w_{i,j})|
  ≤ lim_{M→∞} |w'_{i,j}| |w'_{p,q} − w_{p,q}| + |w_{p,q}| |w'_{i,j} − w_{i,j}| = 0.
This result is readily extended to the product of any number of weights. To show the convergence
of the normalisation terms in (8), we first observe that the sum of weights is bounded away from 0
∀i,   1^T w_i ≥ L,   L > 0,

due to the offsetting (6) and the constant element w_{i,i} = 1 in w_i (although w_{i,i} will be set to zero to enforce sampling without replacement [6]). It can thus be established that
lim_{M→∞} | 1/(1^T w'_i) − 1/(1^T w_i) | = lim_{M→∞} |1^T w'_i − 1^T w_i| / ( (1^T w'_i)(1^T w_i) ) ≤ (1/L²) lim_{M→∞} |1^T w'_i − 1^T w_i| = 0
since the sum of the weights also converges. The result is readily extended to the inverse of the sum
of any number of Hadamard products of weights, since we have also previously established that the
product of any number of weights converges. Finally, since Eq. (8) involves only multiplications of
convergent quantities, q_M(u) will converge to a fixed distribution as the update progresses.
Invariance. Eq. (10) requires that the transition probabilities based on q_M(u) permit an invariant distribution individually for all M. Since we propose and accept based on the Metropolis-Hastings algorithm, detailed balance is satisfied by construction [3], which means that a Markov chain propagated based on q_M(u) will asymptotically sample from the target distribution.
Uniform ergodicity. Eq. (12) requires that q_M(u) for all M be individually ergodic, i.e., the resulting chain using q_M(u) is aperiodic and irreducible. Again, since we simulate the target using Metropolis-Hastings, every proposal has a chance of being rejected, thus implying aperiodicity [3]. Irreducibility is satisfied by the offsetting in (6) and renormalising [20], since this implies that there is always a non-zero probability of reaching any state (minimal subset) from the current state.
The above results apply for the local update proposal q_M(u|θ), which differs from q_M(u) only in the (stationary) probability to select the first index s_1. Hence q_M(u|θ) is also a valid adaptive proposal.
4 Experiments
We compare our approach (ARJMC) against state-of-the-art methods: message passing [18]
(FLOSS), energy minimisation with graph cut [7] (ENERGY), and quadratic programming based on
a novel preference feature [31] (QP-MF). We exclude older methods with known weaknesses, e.g.,
computational inefficiency [19, 17, 26], low accuracy due to greedy search [25], or vulnerability to
outliers [17]. All methods are run in MATLAB except ENERGY, which is available in C++².
For ARJMC, standard deviation σ in (1) is set as t/1.96, where t is the inlier threshold [9] obtained using ground truth model fitting results. The same t is provided to the competitors. In Algorithm 1, temperature T is initialised as 1 and we apply the geometric cooling schedule T_next = 0.99T. In Algorithm 2, probability β is set equal to the current temperature T, thus allowing more global explorations in the parameter space initially before concentrating on local refinement subsequently. Such a helpful strategy is not naturally practicable in disjoint two-stage approaches.
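Putting these settings together, a minimal sketch of the outer annealing loop (T initialised to 1, T_next = 0.99T, β = T); the inner sweep implementing Algorithm 2 and the stopping temperature are assumptions of the sketch.

```python
def anneal(state, rjmcmc_sweep, t_min=1e-3):
    """Algorithm 1 with the settings of Sec. 4.

    state:        the current {k, theta_k} (any representation).
    rjmcmc_sweep: function (state, T, beta) -> state, one run of Algorithm 2.
    t_min:        stopping temperature; its value is our assumption.
    """
    T = 1.0
    while T > t_min:
        state = rjmcmc_sweep(state, T, beta=T)  # beta is set equal to T
        T *= 0.99                               # geometric cooling T_next = 0.99 T
    return state
```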
4.1 Two-view motion segmentation
The goal is to segment point trajectories X matched across two views into distinct motions [27]. Trajectories of a particular motion can be related by a distinct fundamental matrix F ∈ R^{3×3} [15]. Our task is thus to estimate the number of motions k and the fundamental matrices {F_c}_{c=1}^k corresponding to the motions embedded in data X. Note that X may contain false trajectories (outliers). We estimate fundamental matrix hypotheses from minimal subsets of size p = 8 using the 8-point method [14]. The residual g(x_i, F) is computed as the Sampson distance [15].
We test the methods on publicly available two-view motion segmentation datasets [30]. In particular
we test on the 3- and 4-motion datasets provided, namely breadtoycar, carchipscube, toycubecar,
breadcubechips, biscuitbookbox, cubebreadtoychips and breadcartoychips; see the dataset homepage for more details. Correspondences were established via SIFT matching and manual filtering
was done to obtain ground truth segmentation. Examples are shown in Figs. 1(a) and 1(b).
² http://vision.csd.uwo.ca/code/#Multi-label optimization
[Figure 1 panels: (a) breadtoycar dataset with 3 motions (37, 39 and 34 inliers, 56 outliers); (b) cubebreadtoychips dataset with 4 motions (71, 49, 38 and 81 inliers, 88 outliers); (c)–(g) pairwise weight matrices at M = 50, 100, 1000, 5000, 10000; (h) objective function value f(k, θ_k) vs. time (s) and (i) segmentation error (%) vs. time (s), comparing ARJMC with QP-MF, ENERGY and FLOSS under random and Multi-GS sampling (best viewed in colour); (j)–(m) ARJMC labelling at M = 100, 200, 500, 1000.]
Figure 1: (a) and (b) show respectively a 3- and 4-motion dataset (colours show ground truth labelling). To minimise clutter, lines joining false matches are not drawn. (c)–(g) show the evolution of the matrix of pairwise weights (5) computed from (b) as the number of hypotheses M is increased. For presentation the data are arranged according to their structure membership, which gives rise to a 4-block pattern. Observe that the block pattern, hence the weights, converge as M increases. (h) and (i) respectively show performance measures (see text) of four methods on the dataset in (b). (j)–(m) show the evolution of the labelling result of ARJMC as M increases (only one view is shown).
Figs. 1(c)–(g) show the evolution of the pairwise weights (5) as M increases until 10,000 for the data in Fig. 1(b). The matrices exhibit a four-block pattern, indicating strong mutual preference among inliers from the same structure. This phenomenon allows accurate selection of minimal subsets in Multi-GS [6]. More pertinently, as we predicted in Sec. 3.2, the weights converge as M increases, as evidenced by the stabilising block pattern. Note that only a small number of weights are actually computed in Multi-GS [6]; the full matrix of weights is calculated here for illustration only.
We run ARJMC and record the following performance measures: the value of the objective function f(k, θ_k) in Eq. (1), and the segmentation error. The latter involves assigning each datum x_i ∈ X to the nearest structure in θ_k if the residual is less than the threshold t; else x_i is labelled as an outlier. The overall labelling error is then obtained. The measures are recorded at the time instants when M = 100, 200, . . . , 1000 hypotheses have been generated so far in Algorithm 1. Median results over 20 repetitions on the data in Fig. 1(b) are shown in Figs. 1(h) and 1(i). Figs. 1(j)–1(m) depict the evolution of the segmentation result of ARJMC as M increases.
For objective comparisons the competing two-stage methods were tested as follows: First, M =
100, 200, . . . , 1000 hypotheses are accumulatively generated (using both uniform random sampling [9] and Multi-GS [6]). A new instance of each method is invoked on each set of M hypotheses.
We ensure that each method returns the true number of structures for all M; this represents an advantage over ARJMC, since the "online learning" nature of ARJMC means the number of structures
is not discovered until closer to convergence. Results are also shown in Figs. 1(h) and 1(i).
[Table 1 occupies this page; see the caption below. Datasets (with ground-truth inlier/outlier counts): breadtoycar (3 structures; 37, 39 and 34 inliers, 56 outliers), toycubecar (3 structures; 45, 69 and 14 inliers, 72 outliers), carchipscube (3 structures; 19, 33 and 53 inliers, 60 outliers), breadcartoychip (4 structures; 33, 23, 41 and 58 inliers, 82 outliers), breadcubechip (3 structures; 34, 57 and 58 inliers, 81 outliers), biscuitbookbox (3 structures; 67, 41 and 54 inliers, 97 outliers). For each dataset the table lists the median segmentation error of the four methods (ARJMC, FLOSS, ENERGY, QP-MF) at M = 100, 200, . . . , 1000, plus the time in seconds at M = 1000; the individual numeric entries are not recoverable from this extraction.]
Firstly, it is clear that the performance of the two-stage methods on both measures is improved dramatically with the application of Multi-GS for hypothesis generation. From Fig. 1(h), ARJMC is the most efficient in minimising the function f(k, θ_k); it converges to a low value in significantly less time. It should be noted, however, that the other methods are not directly minimising AIC or f(k, θ_k). The segmentation error (which no method here is directly minimising) thus represents a more objective performance measure. From Fig. 1(i), it can be seen that the initial error of ARJMC is much higher than that of all other methods, a direct consequence of not having yet estimated the true number of structures. The error is eventually minimised as ARJMC converges. Table 1, which summarises the results on the other datasets (all using Multi-GS), conveys a similar picture. Further results on multi-homography detection also yield similar outcomes (see supplementary material).
Table 1: Median segmentation error (%) at different numbers of hypotheses M. Time elapsed at M = 1000 is shown at the bottom. The lowest error and time achieved on each dataset is boldfaced.
5 Conclusions
By design, since our algorithm conducts hypothesis sampling, geometric fitting and model selection
simultaneously, it minimises wastage in the sampling process and converges faster than previous
two-stage approaches. This is evident from the experimental results. Underpinning our novel Reversible Jump MCMC method is an efficient hypothesis generator whose proposal distribution is
learned online. Drawing from new theory on Adaptive MCMC, we prove that our efficient hypothesis generator satisfies the properties crucial to ensure convergence to the correct target distribution.
Our work thus links the latest developments from MCMC optimisation and geometric model fitting.
Acknowledgements. The authors would like to thank Anders Eriksson for his insightful comments.
This work was partly supported by the Australian Research Council grant DP0878801.
References
[1] H. Akaike. A new look at the statistical model identification. IEEE Trans. on Automatic Control, 19(6):716–723, 1974.
[2] C. Andrieu, N. de Freitas, and A. Doucet. Robust full Bayesian learning for radial basis networks. Neural Computation, 13:2359–2407, 2001.
[3] C. Andrieu, N. de Freitas, A. Doucet, and M. I. Jordan. An introduction to MCMC for machine learning. Machine Learning, 50:5–43, 2003.
[4] C. Andrieu and J. Thoms. A tutorial on adaptive MCMC. Statistics and Computing, 18(4), 2008.
[5] S. P. Brooks, N. Friel, and R. King. Classical model selection via simulated annealing. J. R. Statist. Soc. B, 65(2):503–520, 2003.
[6] T.-J. Chin, J. Yu, and D. Suter. Accelerated hypothesis generation for multi-structure robust fitting. In European Conf. on Computer Vision, 2010.
[7] A. Delong, A. Osokin, H. Isack, and Y. Boykov. Fast approximate energy minimization with label costs. In Computer Vision and Pattern Recognition, 2010.
[8] L. Fan and T. Pylvänäinen. Adaptive sample consensus for efficient random optimisation. In Int. Symposium on Visual Computing, 2009.
[9] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. of the ACM, 24:381–395, 1981.
[10] S. Gaffney and P. Smyth. Trajectory clustering with mixtures of regression models. In ACM SIG on Knowledge Discovery and Data Mining, 1999.
[11] P. Giordani and R. Kohn. Adaptive independent Metropolis-Hastings by fast estimation of mixtures of normals. Journal of Computational and Graphical Statistics, 19(2):243–259, 2010.
[12] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711–732, 1995.
[13] H. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm. Bernoulli, 7(2):223–242, 2001.
[14] R. Hartley. In defense of the eight-point algorithm. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(6):580–593, 1997.
[15] R. Hartley and A. Zisserman. Multiple View Geometry. Cambridge University Press, 2004.
[16] P. J. Huber. Robust Statistics. John Wiley & Sons Inc., 2009.
[17] Y.-D. Jian and C.-S. Chen. Two-view motion segmentation by mixtures of Dirichlet process with model selection and outlier removal. In International Conference on Computer Vision, 2007.
[18] N. Lazic, I. Givoni, B. Frey, and P. Aarabi. FLoSS: Facility location for subspace segmentation. In IEEE Int. Conf. on Computer Vision, 2009.
[19] H. Li. Two-view motion segmentation from linear programming relaxation. In Computer Vision and Pattern Recognition, 2007.
[20] D. Nott and R. Kohn. Adaptive sampling for Bayesian variable selection. Biometrika, 92:747–763, 2005.
[21] N. Quadrianto, T. S. Caetano, J. Lim, and D. Schuurmans. Convex relaxation of mixture regression with efficient algorithms. In Advances in Neural Information Processing Systems, 2010.
[22] S. Richardson and P. J. Green. On Bayesian analysis of mixtures with an unknown number of components. J. R. Statist. Soc. B, 59(4):731–792, 1997.
[23] G. O. Roberts and J. S. Rosenthal. Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. Journal of Applied Probability, 44:458–475, 2007.
[24] G. O. Roberts and J. S. Rosenthal. Examples of adaptive MCMC. Journal of Computational and Graphical Statistics, 18(2):349–367, 2009.
[25] K. Schindler and D. Suter. Two-view multibody structure-and-motion with outliers through model selection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 28(6):983–995, 2006.
[26] N. Thakoor and J. Gao. Branch-and-bound hypothesis selection for two-view multiple structure and motion segmentation. In Computer Vision and Pattern Recognition, 2008.
[27] P. H. S. Torr. Motion segmentation and outlier detection. PhD thesis, Dept. of Engineering Science, University of Oxford, 1995.
[28] P. H. S. Torr and C. H. Davidson. IMPSAC: Synthesis of importance sampling and random sample consensus. IEEE Trans. on Pattern Analysis and Machine Intelligence, 25(3):354–364, 2003.
[29] E. Vincent and R. Laganière. Detecting planar homographies in an image pair. In International Symposium on Image and Signal Processing and Analysis, 2001.
[30] H. S. Wong, T.-J. Chin, J. Yu, and D. Suter. Dynamic and hierarchical multi-structure geometric model fitting. In International Conference on Computer Vision, 2011.
[31] J. Yu, T.-J. Chin, and D. Suter. A global optimization approach to robust multi-model fitting. In Computer Vision and Pattern Recognition, 2011.
3,820 | 4,459 | Projection onto A Nonnegative Max-Heap
Jun Liu
Arizona State University
Tempe, AZ 85287, USA
Liang Sun
Arizona State University
Tempe, AZ 85287, USA
Jieping Ye
Arizona State University
Tempe, AZ 85287, USA
[email protected]
[email protected]
[email protected]
Abstract
We consider the problem of computing the Euclidean projection of a vector
of length p onto a non-negative max-heap: an ordered tree where the values of the nodes are all nonnegative and the value of any parent node is no
less than the value(s) of its child node(s). This Euclidean projection plays
a building block role in the optimization problem with a non-negative max-heap constraint. Such a constraint is desirable when the features follow
an ordered tree structure, that is, a given feature is selected for the given
regression/classification task only if its parent node is selected. In this paper, we show that such a Euclidean projection problem admits an analytical
solution and we develop a top-down algorithm where the key operation is
to find the so-called maximal root-tree of the subtree rooted at each node.
A naive approach for finding the maximal root-tree is to enumerate all the
possible root-trees, which, however, does not scale well. We reveal several
important properties of the maximal root-tree, based on which we design a
bottom-up algorithm with merge for efficiently finding the maximal root-tree. The proposed algorithm has a (worst-case) linear time complexity for a sequential list, and O(p^2) for a general tree. We report simulation
results showing the effectiveness of the max-heap for regression with an ordered tree structure. Empirical results show that the proposed algorithm
has an expected linear time complexity for many special cases including a
sequential list, a full binary tree, and a tree with depth 1.
1 Introduction
In many regression/classification problems, the features exhibit certain hierarchical or structural relationships, the usage of which can yield an interpretable model with improved regression/classification performance [25]. Recently, there has been increasing interest in structured sparsity, with various approaches for incorporating structures; see [7, 8, 9, 17, 24, 25]
and references therein. In this paper, we consider an ordered tree structure: a given feature
is selected for the given regression/classification task only if its parent node is selected. To
incorporate such an ordered tree structure, we assume that the model parameter x ∈ R^p follows the non-negative max-heap structure¹:

P = {x ≥ 0, x_i ≥ x_j ∀(x_i, x_j) ∈ E^t},        (1)

where T^t = (V^t, E^t) is a target tree with V^t = {x_1, x_2, . . . , x_p} containing all the nodes and E^t all the edges. The constraint set P implies that if x_i is the parent node of a child node x_j, then the value of x_i is no less than the value of x_j. In other words, if a parent node x_i is 0, then any of its child nodes x_j is also 0. Figure 1 illustrates three special tree structures:
1) a full binary tree, 2) a sequential list, and 3) a tree with depth 1.
¹ To deal with the negative model parameters, one can make use of the technique employed in [24], which solves the scaled version of the least square estimate.
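As a quick illustration, the following sketch checks membership of x in the constraint set P of Eq. (1); the parent-array tree encoding is our choice of representation.

```python
def in_max_heap(x, parent, tol=0.0):
    """Check x in P of Eq. (1): x >= 0 and x_i >= x_j for every edge (x_i, x_j).

    x:      list of node values, x[0] is the root.
    parent: parent[j] is the index of the parent of node j (parent[0] is None).
    """
    for j, v in enumerate(x):
        if v < -tol:
            return False
        if parent[j] is not None and x[parent[j]] < v - tol:
            return False
    return True
```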
[Figure 1 panels (a)–(c): trees over nodes x1, . . . , x7; see the caption below.]
Figure 1: Illustration of a non-negative max-heap depicted in (1). Plots (a), (b), and (c) correspond
to a full binary tree, a sequential list, and a tree with depth 1, respectively.
The set P defined in (1) induces the so-called "heredity principle" [3, 6, 18, 24], which has been proven effective for high-dimensional variable selection. In a recent study [12], Li et al. conducted a meta-analysis of 113 data sets from published factorial experiments and concluded that an overwhelming majority of these real studies conform with the heredity principles. The ordered tree structure is a special case of the non-negative garrote discussed in [24] when the hierarchical relationship is depicted by a tree. Therefore, the asymptotic properties established in [24] are applicable to the ordered tree structure. Several related approaches
that can incorporate the ordered tree structure include the Wedge approach [17] and the
hierarchical group Lasso [25]. The Wedge approach incorporates such ordering information by designing a penalty for the model parameter x as Ω(x|P) = inf_{t∈P} (1/2) Σ_{i=1}^p (x_i²/t_i + t_i), with the tree being a sequential list. By imposing the mixed ℓ1-ℓ2 norm on each group formed by
the nodes in the subtree of a parent node, the hierarchical group Lasso is able to incorporate such ordering information. The hierarchical group Lasso has been applied for multi-task
learning in [11] with a tree structure, and the efficient computation was discussed in [10, 15].
Compared to Wedge and hierarchical group Lasso, the max-heap in (1) incorporates such
ordering information in a direct way, and our simulation results show that the max-heap
can achieve lower reconstruction error than both approaches.
In estimating the model parameter satisfying the ordered tree structure, one needs to solve
the following constrained optimization problem:
min_{x∈P} f(x)        (2)
for some convex function f(·). The problem (2) can be solved via many approaches including
subgradient descent, cutting plane method, gradient descent, accelerated gradient descent,
etc. [19, 20]. In applying these approaches, a key building block is the so-called Euclidean
projection of a vector v onto the convex set P :
π_P(v) = arg min_{x∈P} (1/2) ||x − v||_2^2,        (3)
which ensures that the solution belongs to the constraint set P . For some special set P (e.g.,
hyperplane, halfspace, and rectangle), the Euclidean projection admits a simple analytical
solution, see [2]. In the literature, researchers have developed efficient Euclidean projection
algorithms for the ℓ1-ball [5, 14], the ℓ1/ℓ2-ball [1], and the polyhedra [4, 22]. When P is
induced by a sequential list, a linear time algorithm was recently proposed in [26]. Without
the non-negative constraints, problem (3) is the so-called isotonic regression problem [16, 21].
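To make the role of the projection concrete, here is a minimal projected gradient sketch for problem (2); the fixed step size and iteration count are our assumptions, and project_P stands for any routine computing Eq. (3), such as Atda developed below.

```python
import numpy as np

def projected_gradient(grad_f, project_P, x0, step=0.1, iters=200):
    """Minimise f over P by the update x <- project_P(x - step * grad_f(x)).

    grad_f:    gradient oracle of the convex function f.
    project_P: Euclidean projection onto P, i.e. Eq. (3).
    """
    x = project_P(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = project_P(x - step * grad_f(x))
    return x
```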
Our major technical contribution in this paper is the efficient computation of (3) for the set
P defined in (1). In Section 2, we show that the Euclidean projection admits an analytical
solution, and we develop a top-down algorithm where the key operation is to find the
so-called maximal root-tree of the subtree rooted at each node. In Section 3, we design
a bottom-up algorithm with merge for efficiently finding the maximal root-tree by using
its properties. We provide empirical results for the proposed algorithm in Section 4, and
conclude this paper in Section 5.
2 Atda: A Top-Down Algorithm
In this section, we develop an algorithm in a top-down manner called Atda for solving (3).
With the target tree T^t = (V^t, E^t), we construct the input tree T = (V, E) with the input vector v, where V = {v_1, v_2, . . . , v_p} and E = {(v_i, v_j) | (x_i, x_j) ∈ E^t}. For the convenience
of presenting our proposed algorithm, we begin with several definitions. We also provide
some examples for elaborating the definitions in the supplementary file A.1.
Definition 1. For a non-empty tree T = (V, E), we define its root-tree as any non-empty tree T̃ = (Ṽ, Ẽ) that satisfies: 1) Ṽ ⊆ V, 2) Ẽ ⊆ E, and 3) T̃ shares the same root as T.
Definition 2. For a non-empty tree T = (V, E), we define R(T) as the root-tree set containing all its root-trees.
Definition 3. For a non-empty tree T = (V, E), we define

m(T) = max( (Σ_{v_i ∈ V} v_i) / |V| , 0 ),        (4)

which equals the mean of all the nodes in T if such a mean is non-negative, and 0 otherwise.
Definition 4. For a non-empty tree T = (V, E), we define its maximal root-tree as

M_max(T) = arg max_{T̃=(Ṽ,Ẽ): T̃∈R(T), m(T̃)=m_max(T)} |Ṽ|,        (5)

where

m_max(T) = max_{T̃∈R(T)} m(T̃)        (6)

is the maximal value of all the root-trees of the tree T. Note that if two root-trees share the same maximal value, (5) selects the one with the largest tree size.
When T̃ = (Ṽ, Ẽ) is a part of a "larger" tree T = (V, E), i.e., Ṽ ⊆ V and Ẽ ⊆ E, we can treat T̃ as a "super-node" of the tree T with value m(T̃). Thus, we have the following definition of a super-tree (note that a super-tree provides a disjoint partition of the given tree):
Definition 5. For a non-empty tree T = (V, E), we define its super-tree as S = (V_S, E_S), which satisfies: 1) each node in V_S = {T_1, T_2, . . . , T_n} is a non-empty tree with T_i = (V_i, E_i), 2) V_i ⊆ V and E_i ⊆ E, 3) V_i ∩ V_j = ∅ for i ≠ j and V = ∪_{i=1}^n V_i, and 4) (T_i, T_j) ∈ E_S if and only if there exists a node in T_j whose parent node is in T_i.
2.1 Proposed Algorithm
We present the pseudo code for solving (3) in Algorithm 1. The key idea of the proposed algorithm is that, in the i-th call, we find T_i = M_max(T), the maximal root-tree of T, set the entries of x̂ corresponding to the nodes of T_i to m_i = m_max(T) = m(T_i), remove T_i from the tree T, and apply Atda to the resulting trees one by one recursively.
Algorithm 1 A Top-Down Algorithm: Atda
Input: the tree structure T = (V, E), i
Output: x̂ ∈ R^p
 1: Set i = i + 1
 2: Find the maximal root-tree of T, denoted by T_i = (V_i, E_i), and set m_i = m(T_i)
 3: if m_i > 0 then
 4:   Set x̂_j = m_i, ∀ v_j ∈ V_i
 5:   Remove the root-tree T_i from T, denote the resulting trees as T̃_1, T̃_2, ..., T̃_{r_i}, and
      apply Atda(T̃_j, i), ∀ j = 1, 2, ..., r_i
 6: else
 7:   Set x̂_j = m_i, ∀ v_j ∈ V_i
 8: end if
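To make the recursion concrete, below is a minimal Python sketch of Atda on a tree stored
as a children map with one value per node. Here the maximal root-tree is found by brute-force
enumeration of all root-trees (the naive Anae strategy discussed in Section 3); all function and
variable names are ours, not the paper's, and the efficient Abuam alternative is sketched after
Algorithm 2.

import itertools

def root_trees(children, node):
    """Enumerate all root-trees of the subtree rooted at `node`, as frozensets of nodes."""
    options = []
    for c in children.get(node, []):
        # each child subtree is either skipped entirely or contributes one of its root-trees
        options.append(list(root_trees(children, c)) + [frozenset()])
    for combo in itertools.product(*options):
        yield frozenset({node}).union(*combo)

def m(values, nodes):
    """m(T) from Definition 3: mean of the node values, clipped at zero."""
    return max(sum(values[i] for i in nodes) / len(nodes), 0.0)

def maximal_root_tree(children, values, root):
    """Naive Anae: the root-tree with the largest m, ties broken by size, as in (5)."""
    return max(root_trees(children, root), key=lambda S: (m(values, S), len(S)))

def atda(children, values, root, x=None):
    """Algorithm 1: fill x with the projection values for the subtree rooted at `root`."""
    x = {} if x is None else x
    Ti = maximal_root_tree(children, values, root)
    mi = m(values, Ti)
    for v in Ti:
        x[v] = mi
    if mi > 0.0:
        # recurse into the trees that remain after removing Ti
        for u in Ti:
            for c in children.get(u, []):
                if c not in Ti:
                    atda(children, values, c, x)
    return x

# toy input: root 0 with children 1 and 2, and node 1 has child 3
children = {0: [1, 2], 1: [3], 2: [], 3: []}
values = {0: 1.0, 1: -4.0, 2: 3.0, 3: 5.0}
print(atda(children, values, 0))   # values: x_0 = x_2 = 2.0, x_1 = x_3 = 0.5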
2.2  Illustration & Justification
For a better illustration and justification of the proposed algorithm, we provide the analysis
of Atda for a special case, the sequential list, in the supplementary file A.2.
Let us analyze Algorithm 1 for the general tree. Figure 2 illustrates solving (3) via Algorithm 1 for a tree with depth 3.

[Figure 2 appears here: a 15-node tree x_1, ..., x_15 with integer node values, shown in panels (a)-(f).]

Figure 2: Illustration of Algorithm 1 for solving (3) for a tree with depth 3. Plot (a) shows the
target tree T^t, and plots (b-e) illustrate Atda. Specifically, plot (b) denotes the input tree T,
with the dashed frame displaying its maximal root-tree; plot (c) depicts the resulting trees after
removing the maximal root-tree in plot (b); plot (d) shows the resulting super-tree (we treat each
tree enclosed by the dashed frame as a super-node) of the algorithm; plot (e) gives the solution
x̂ ∈ R^15; and the edges of plot (f) show the dual variables, from which we can also obtain the
optimal solution x̂ (refer to the proof of Theorem 1).

Plot (a) shows the target tree T^t, and plot (b) denotes the input tree T. The dashed frame of
plot (b) shows M_max(T), the maximal root-tree of T, and we have m_max(T) = 3. Thus, we set
the corresponding entries of x̂ to 3. Plot (c) depicts the resulting trees after removing the maximal
root-tree in plot (b), and plot (d) shows the maximal root-trees (enclosed by dashed frames)
generated by the algorithm. When treating each generated maximal root-tree as a super-node with
the value defined in Definition 3, plot (d) is a super-tree of the input tree T. In addition, the
super-tree is a max-heap, i.e., the value of the parent node is no less than the values of its child
nodes. Plot (e) gives the solution x̂ ∈ R^15. The edges of plot (f) correspond to the values of the
dual variables, from which we can also obtain the optimal solution x̂. Finally, we can observe that
the non-zero entries of x̂ constitute a cut of the original tree.
We verify the correctness of Algorithm 1 for the general tree in the following theorem. We
make use of the KKT conditions and the variational inequality [20] in the proof.
Theorem 1. x̂ = Atda(T, 0) provides the unique optimal solution to (3).
Proof: As the objective function of (3) is strictly convex and the constraints are affine, it
admits a unique solution. After running Algorithm 1, we obtain the sequences {T_i}_{i=1}^k and
{m_i}_{i=1}^k, where k satisfies 1 ≤ k ≤ p. It is easy to verify that the trees T_i, i = 1, 2, ..., k
constitute a disjoint partition of the input tree T. With the sequences {T_i}_{i=1}^k and {m_i}_{i=1}^k,
we can construct a super-tree of the input tree T as follows: 1) we treat T_i as a super-node
with value m_i, and 2) we put an edge between T_i and T_j if there is an edge between the
nodes of T_i and T_j in the input tree T. With Algorithm 1, we can verify that the resulting
super-tree has the property that the value of a parent node is no less than the values of its child
nodes. Therefore, x̂ = Atda(T, 0) satisfies x̂ ∈ P.
Let x^l and v^l denote the subsets of x and v corresponding to the indices appearing in the
subtree T_l, respectively. Denote P^l = {x^l : x^l ≥ 0, x_i ≥ x_j, (v_i, v_j) ∈ E_l}, I_1 = {l : m_l > 0},
and I_2 = {l : m_l = 0}. Our proof is based on the following inequality:

    min_{x ∈ P} (1/2) ‖x − v‖²₂ ≥ Σ_{l ∈ I_1} min_{x^l ∈ P^l} (1/2) ‖x^l − v^l‖²₂ + Σ_{l ∈ I_2} min_{x^l ∈ P^l} (1/2) ‖x^l − v^l‖²₂,    (7)

which holds as the left hand side has additional inequality constraints compared to the
right hand side. Our methodology is to show that x̂ = Atda(T, 0) provides the optimal
solution to the right hand side of (7), i.e.,

    x̂^l = argmin_{x^l ∈ P^l} (1/2) ‖x^l − v^l‖²₂,  ∀ l ∈ I_1,    (8)

    x̂^l = argmin_{x^l ∈ P^l} (1/2) ‖x^l − v^l‖²₂,  ∀ l ∈ I_2,    (9)

which, together with the facts (1/2)‖x̂ − v‖²₂ ≥ min_{x ∈ P} (1/2)‖x − v‖²₂ and x̂ ∈ P, lead to our main
argument. Next, we prove (8) by the KKT conditions, and prove (9) by the variational
inequality [20].
Firstly, for each l ∈ I_1, we introduce the dual variable y_{ij} for the edge (v_i, v_j) ∈ E_l, and y_{ii} if
v_i ∈ L_l, where L_l contains all the leaf nodes of the tree T_l. Denote the root of T_l by v_{r_l}.
For all v_i ∈ V_l, v_i ≠ v_{r_l}, we denote the parent node of v_i by v_{j_i}, and for the root v_{r_l}, we set
j_{r_l} = r_l. We let

    C_i^l = {j | v_j is a child node of v_i in the tree T_l},
    R_i^l = {j | v_j is in the subtree of T_l rooted at v_i}.

To prove (8), we verify that the primal variable x̂ = Atda(T, 0) and the dual variable ŷ
satisfy the following KKT conditions:

    ∀(v_i, v_j) ∈ E_l:  x̂_i − x̂_j ≥ 0    (10)
    ∀(v_i, v_j) ∈ E_l:  (x̂_i − x̂_j) ŷ_{ij} = 0    (11)
    ∀v_i ∈ L_l:  ŷ_{ii} x̂_i = 0    (12)
    ∀v_i ∈ V_l:  x̂_i − v_i − Σ_{j ∈ C_i^l} ŷ_{ij} + ŷ_{j_i i} = 0    (13)
    ∀(v_i, v_j) ∈ E_l:  ŷ_{ij} ≥ 0    (14)
    ∀v_i ∈ L_l:  ŷ_{ii} ≥ 0,    (15)

where ŷ_{j_{r_l} r_l} = 0 (note that ŷ_{j_{r_l} r_l} is a dual variable introduced for the simplicity
of presenting (13)), and the dual variable ŷ is set as:

    ŷ_{ii} = 0,  ∀ i ∈ L_l,    (16)
    ŷ_{j_i i} = v_i − m_l + Σ_{j ∈ C_i^l} ŷ_{ij},  ∀ v_i ∈ V_l.    (17)
According to Algorithm 1, x̂_i = m_l > 0, ∀v_i ∈ V_l, l ∈ I_1. Thus, we have (10)-(12) and (15).
It follows from (17) that (13) holds. According to (16) and (17), we have

    ŷ_{j_i i} = Σ_{j ∈ R_i^l} v_j − |R_i^l| m_l,  ∀ v_i ∈ V_l,    (18)

where |R_i^l| denotes the number of elements in R_i^l, the subtree of T_l rooted at v_i. From
the nature of the maximal root-tree T_l, l ∈ I_1, we have Σ_{j ∈ R_i^l} v_j ≥ |R_i^l| m_l. Otherwise, if
Σ_{j ∈ R_i^l} v_j < |R_i^l| m_l, we could construct from T_l a new root-tree T̃_l by removing the subtree
of T_l rooted at v_i, so that T̃_l achieves a larger value than T_l. This contradicts the
assumption that T_l, l ∈ I_1, is the maximal root-tree of the working tree T. Therefore, it
follows from (18) that (14) holds.
Secondly, we prove (9) by verifying the following optimality condition:

    ⟨x^l − x̂^l, x̂^l − v^l⟩ ≥ 0,  ∀ x^l ∈ P^l, l ∈ I_2,    (19)

which is the so-called variational inequality condition for x̂^l being the optimal solution to (9).
According to Algorithm 1, if l ∈ I_2, we have x̂_i = 0, ∀v_i ∈ V_l. Thus, (19) is equivalent to

    ⟨x^l, v^l⟩ ≤ 0,  ∀ x^l ∈ P^l, l ∈ I_2.    (20)

For a given x^l ∈ P^l, if x_i = 0, ∀v_i ∈ V_l, then (20) naturally holds. Next, we consider x^l ≠ 0.
Denote by x̄_1^l the minimal nonzero element in x^l, and by T_l^1 = (V_l^1, E_l^1) the tree constructed by
removing from T_l the nodes corresponding to the indices in the set {i : x_i^l = 0, v_i ∈ V_l}. It is
clear that T_l^1 shares the same root as T_l. It follows from Algorithm 1 that Σ_{i: v_i ∈ V_l^1} v_i ≤ 0.
Thus, we have

    ⟨x^l, v^l⟩ = x̄_1^l Σ_{i: v_i ∈ V_l^1} v_i + Σ_{i: v_i ∈ V_l^1} (x_i − x̄_1^l) v_i ≤ Σ_{i: v_i ∈ V_l^1} (x_i − x̄_1^l) v_i.
If x_i^l = x̄_1^l, ∀v_i ∈ V_l^1, we arrive at (20). Otherwise, we set r = 2 and denote by x̄_r^l the minimal
nonzero element in the set {x_i − Σ_{j=1}^{r−1} x̄_j^l : v_i ∈ V_l^{r−1}}, and by T_l^r = (V_l^r, E_l^r) the subtree of
T_l^{r−1} obtained by removing those nodes with indices in the set {i : x_i^l − Σ_{j=1}^{r−1} x̄_j^l = 0, v_i ∈ V_l^{r−1}}.
It is clear that T_l^r shares the same root as T_l^{r−1}, and as T_l as well, so that it follows from
Algorithm 1 that Σ_{i: v_i ∈ V_l^r} v_i ≤ 0. Therefore, we have

    Σ_{i: v_i ∈ V_l^{r−1}} (x_i − Σ_{j=1}^{r−1} x̄_j^l) v_i = x̄_r^l Σ_{i: v_i ∈ V_l^r} v_i + Σ_{i: v_i ∈ V_l^r} (x_i − Σ_{j=1}^r x̄_j^l) v_i ≤ Σ_{i: v_i ∈ V_l^r} (x_i − Σ_{j=1}^r x̄_j^l) v_i.    (21)

Repeating the above process until V_l^r is empty, we can verify that (20) holds.
For a better understanding of the proof, we make use of the edges of Figure 2 (f) to show
the dual variables, where the edge connecting v_i and v_j corresponds to the dual variable ŷ_{ij},
and the edge starting from the leaf node v_i corresponds to the dual variable ŷ_{ii}. With the
dual variables, we can compute x̂ via (13). We note that, for a maximal root-tree with a
positive value, the associated dual variables are unique, but for a maximal root-tree with
zero value, the associated dual variables may not be unique. For example, in Figure 2 (f),
we set ŷ_{ii} = 1 for i = 12, ŷ_{ii} = 0 for i = 13, ŷ_{ij} = 2 for i = 6, j = 12, and ŷ_{ij} = 2 for
i = 6, j = 13. It is easy to check that the dual variables can also be set as follows: ŷ_{ii} = 0
for i = 12, ŷ_{ii} = 1 for i = 13, ŷ_{ij} = 1 for i = 6, j = 12, and ŷ_{ij} = 3 for i = 6, j = 13.
3  Finding the Maximal Root-Tree
A key operation of Algorithm 1 is to find the maximal root-tree used in Step 2. A naive
approach for finding the maximal root-tree of a tree T is to enumerate all possible root-trees
in the root-tree set R(T) and identify the maximal root-tree via (5). We call such
an approach Anae, which stands for a naive algorithm with enumeration. Although Anae
is simple to describe, it has a very high time complexity (see the analysis given in the supplementary file A.3). To this end, we develop Abuam (A Bottom-Up Algorithm with Merge).
The underlying idea is to make use of the special structure of the maximal root-tree defined
in (5) to avoid the enumeration of all possible root-trees.
We begin the discussion with some key properties of the maximal root-tree; the proof
is given in the supplementary file A.4.
Lemma 1. For a non-empty tree T = (V, E), denote its maximal root-tree as T_max =
(V_max, E_max). Let T̃ = (Ṽ, Ẽ) be a root-tree of T_max. Assume that there are n nodes
v_{i_1}, ..., v_{i_n} which satisfy: 1) v_{i_j} ∉ Ṽ, 2) v_{i_j} ∈ V, and 3) the parent node of v_{i_j} is in
Ṽ. If n ≥ 1, we denote the subtree of T rooted at v_{i_j} as T^j = (V^j, E^j), j = 1, 2, ..., n,
denote T_max^j = (V_max^j, E_max^j) as the maximal root-tree of T^j, and set m̄ = max_{j=1,2,...,n} m(T_max^j).
Then, the following hold: (1) If n = 0, then T_max = T̃ = T; (2) If n ≥ 1, m(T̃) = 0, and
m̄ = 0, then T_max = T; (3) If n ≥ 1, m(T̃) > 0, and m(T̃) > m̄, then T_max = T̃; (4) If
n ≥ 1, m(T̃) > 0, and m(T̃) ≤ m̄, then V_max^j ⊆ V_max, E_max^j ⊆ E_max and (v_{i_0}, v_{i_j}) ∈ E_max,
∀j : m(T_max^j) = m̄; and (5) If n ≥ 1, m(T̃) = 0, and m̄ > 0, then V_max^j ⊆ V_max,
E_max^j ⊆ E_max and (v_{i_0}, v_{i_j}) ∈ E_max, ∀j : m(T_max^j) = m̄.
For the convenience of presenting our proposed algorithm, we define the operation "merge"
as follows:
Definition 6. Let T = (V, E) be a non-empty tree, and let T_1 = (V^1, E^1) and T_2 = (V^2, E^2)
be two trees that satisfy: 1) they are composed of a subset of the nodes and edges of T, i.e.,
V^1 ⊆ V, V^2 ⊆ V, E^1 ⊆ E, and E^2 ⊆ E; 2) they do not overlap, i.e., V^1 ∩ V^2 = ∅ and
E^1 ∩ E^2 = ∅; and 3) in the tree T, v_{i_2}, the root node of T_2, is a child of v_{i_1}, a leaf node
of T_1. We define the operation "merge" as T̄ = merge(T_1, T_2, T), where T̄ = (V̄, Ē) with
V̄ = V^1 ∪ V^2 and Ē = E^1 ∪ E^2 ∪ {(v_{i_1}, v_{i_2})}.
Next, we make use of Lemma 1 to efficiently compute the maximal root-tree, and present
the pseudo code for Abuam in Algorithm 2. We provide the illustration of the proposed
algorithm and the analysis of its computational cost in the supplementary files A.5 and A.6,
respectively.
Algorithm 2 A Bottom-Up Algorithm with Merge: Abuam
Input: the input tree T = (V, E)
Output: the maximal root-tree T_max = (V_max, E_max)
 1: Set T_0 = (V_0, E_0), where V_0 = {v_{i_0}} (the root of T) and E_0 = ∅
 2: if v_{i_0} does not have a child node in T then
 3:   Set T_max = T_0, return
 4: end if
 5: while 1 do
 6:   Set m̄ = 0; denote by v_{i_1}, ..., v_{i_n} the n nodes that satisfy: 1) v_{i_j} ∉ V_0, 2) v_{i_j} ∈ V,
      and 3) the parent node of v_{i_j} is in V_0; and denote by T^j = (V^j, E^j), j = 1, 2, ..., n,
      the subtree of T rooted at v_{i_j}
 7:   if n = 0 then
 8:     Set T_max = T_0 = T, return
 9:   end if
10:   for j = 1 to n do
11:     Set T_max^j = Abuam(T^j), and m̄ = max(m(T_max^j), m̄)
12:   end for
13:   if m(T_0) = m̄ = 0 then
14:     Set T_max = T, return
15:   else if m(T_0) > 0 and m(T_0) > m̄ then
16:     Set T_max = T_0, return
17:   else
18:     Set T_0 = merge(T_0, T_max^j, T), ∀j : m(T_max^j) = m̄
19:   end if
20: end while
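The merge-based recursion translates directly into code. Below is a minimal Python sketch
of Abuam on the same children/values representation used for Atda above; node sets stand in
for (sub)trees, so the merge of Definition 6 reduces to a set union (the connecting edge being
implicit in the children map). The names and exact floating-point equality tests are our
choices, not the paper's.

def m(values, nodes):
    """m(T) from Definition 3: mean of node values, clipped at zero."""
    return max(sum(values[i] for i in nodes) / len(nodes), 0.0)

def descendants(children, root):
    """All nodes of the subtree rooted at `root` (iterative depth-first walk)."""
    out, stack = set(), [root]
    while stack:
        u = stack.pop()
        out.add(u)
        stack.extend(children.get(u, []))
    return out

def abuam(children, values, root):
    """Algorithm 2: node set of the maximal root-tree of the subtree rooted at `root`."""
    T0 = {root}
    while True:
        frontier = [c for u in T0 for c in children.get(u, []) if c not in T0]
        if not frontier:
            return T0                                # T0 covers the whole subtree
        subs = [abuam(children, values, c) for c in frontier]
        m_bar = max(m(values, S) for S in subs)
        m0 = m(values, T0)
        if m0 == 0.0 and m_bar == 0.0:
            return descendants(children, root)       # Lemma 1, case (2)
        if m0 > 0.0 and m0 > m_bar:
            return T0                                # Lemma 1, case (3)
        for S in subs:                               # Lemma 1, cases (4)-(5): merge
            if m(values, S) == m_bar:
                T0 |= S

children = {0: [1, 2], 1: [3], 2: [], 3: []}
values = {0: 1.0, 1: -4.0, 2: 3.0, 3: 5.0}
print(sorted(abuam(children, values, 0)))            # [0, 2], as the brute force finds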
Making use of the fact that T0 is always a valid root-tree of Tmax , the maximal root-tree of
T , we can easily prove the following result using Lemma 1.
Theorem 2. Tmax returned by Algorithm 2 is the maximal root-tree of the input tree T .
4  Numerical Simulations
Effectiveness of the Max-Heap Structure  We test the effectiveness of the max-heap
structure for linear regression b = Ax, following the same experimental setting as in [17].
Specifically, the elements of A ∈ R^{n×p} are generated i.i.d. from the Gaussian distribution
with zero mean and unit standard deviation, and the columns of A are then normalized to have
unit length. The regression vector x has p = 127 nonincreasing elements, where the first
10 elements are set as x*_i = 11 − i, i = 1, 2, ..., 10, and the rest are zeros. We compared
with the following three approaches: Lasso [23], Group Lasso [25], and Wedge [17]. Lasso
makes no use of such ordering, while Wedge incorporates the structure by using an auxiliary
ordered variable. For Group Lasso and Max-Heap, we try binary-tree grouping and list-tree
grouping, where the associated trees are a full binary tree and a sequential list, respectively.
The regression vector is put on the tree so that, the closer a node is to the root, the larger
the element placed on it. In Group Lasso, the nodes appearing in the same subtree form a
group. For the compared approaches, we use the implementations provided in [17]²; and for
Max-Heap, we solve (2) with f(x) = (1/2)‖Ax − b‖²₂ + λ‖x‖₁ for some small λ = r · ‖Aᵀb‖_∞ (we
set r = 10⁻⁴ and 10⁻⁸ for the binary-tree grouping and list-tree grouping, respectively) and
apply the accelerated gradient descent [19] approach with our proposed Euclidean projection.
We compute the average model error ‖x − x*‖₂ over 50 independent runs, and report the
results for a varying sample size n in Figure 3 (a) & (b). As expected, GL-binary,
MH-binary, Wedge, GL-list and MH-list outperform Lasso, which does not incorporate such
ordering information. MH-binary performs better than GL-binary, and MH-list performs
better than Wedge and GL-list, due to the direct usage of the ordering information. In
addition, the list-tree grouping performs better than the binary-tree grouping, as it makes
better use of the ordering information.
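For reference, the sequential-list experiment can be reproduced in miniature with plain
(non-accelerated) projected gradient, since on the non-negative max-heap set the ℓ1 term is
linear. The list projection below implements Atda specialized to a chain (the maximal
root-tree of a chain is its longest prefix of maximal clipped mean); the step size, iteration
count, and r are illustrative choices under our assumptions, not the paper's exact settings.

import numpy as np

def project_chain(v):
    """Atda on a sequential list: project v onto x_1 >= x_2 >= ... >= x_p >= 0."""
    v = np.asarray(v, dtype=float)
    x = np.zeros_like(v)
    start = 0
    while start < len(v):
        means = np.cumsum(v[start:]) / np.arange(1, len(v) - start + 1)
        best = means.max()
        if best <= 0.0:
            break                                   # remaining entries stay at zero
        end = start + int(np.where(np.isclose(means, best))[0][-1]) + 1
        x[start:end] = best                         # set the block to its mean
        start = end
    return x

rng = np.random.default_rng(0)
n, p = 50, 127
A = rng.standard_normal((n, p))
A /= np.linalg.norm(A, axis=0)                      # unit-length columns
x_true = np.concatenate([11.0 - np.arange(1, 11), np.zeros(p - 10)])
b = A @ x_true
lam = 1e-8 * np.abs(A.T @ b).max()                  # lambda = r * ||A^T b||_inf, r = 1e-8
step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1/L for the quadratic part
x = np.zeros(p)
for _ in range(2000):                               # plain projected gradient
    x = project_chain(x - step * (A.T @ (A @ x - b) + lam))
print("model error:", np.linalg.norm(x - x_true))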
² http://www.cs.ucl.ac.uk/staff/M.Pontil/software/sparsity.html
[Figure 3 appears here: plots (a)-(b) show model error versus sample size (12-50) for Lasso,
GL-binary, MH-binary (Gaussian v) and for Wedge, GL-list, MH-list (uniform v); plots (c)-(d)
show computational time versus p for the sequential list, full binary tree, and tree of depth 1;
plots (e)-(f) show computational time over 100 random initializations of v for the full binary
tree with d = 10, 12, ..., 20.]
Figure 3: Simulation results. In plots (a) and (b), we show the average model error ‖x − x*‖₂
over 50 independent runs by different approaches with the full binary-tree ordering and the list-tree
ordering. In plots (c) and (d), we report the computational time (in seconds) of the proposed Atda
(averaged over 100 runs) with different randomly initialized input v. In plots (e) and (f), we show
the computational time of Atda over 100 runs.
Efficiency of the Proposed Projection  We test the efficiency of the proposed Atda
approach for solving the Euclidean projection onto the non-negative max-heap, equipped
with our proposed Abuam approach for finding the maximal root-trees. In the experiments,
we make use of the three tree structures depicted in Figure 1, and try two different
distributions for randomly and independently generating the entries of the input v ∈ R^p: 1) the
Gaussian distribution with zero mean and unit standard deviation, and 2) the uniform
distribution on [0, 1]. In Figure 3 (c) & (d), we report the average computational time (in seconds)
over 100 runs under different values of p = 2^{d+1} − 1, where d = 10, 12, ..., 20. We can
observe that the proposed algorithm scales linearly with the size p. In Figure 3 (e) & (f),
we report the computational time of Atda over 100 runs when the ordered tree structure is
a full binary tree. The results show that the computational time of the proposed algorithm
is relatively stable across runs, especially for larger d or p. Note that the source code
for our proposed algorithm has been included in the SLEP package [13].
5  Conclusion
In this paper, we have developed an efficient algorithm for the computation of the Euclidean
projection onto a non-negative max-heap. The proposed algorithm has a (worst-case) linear
time complexity for a sequential list, and O(p²) for a general tree. Empirical results show
that: 1) the proposed approach deals with the ordering information better than existing
approaches, and 2) the proposed algorithm has an expected linear time complexity for the
sequential list, the full binary tree, and the tree of depth 1. It will be interesting to explore
whether the proposed Abuam has a worst-case linear (or linearithmic) time complexity for
the binary tree. We plan to apply the proposed algorithms to real-world applications with
an ordered tree structure. We also plan to extend our proposed approaches to the general
hierarchical structure.
Acknowledgments
This work was supported by NSF IIS-0812551, IIS-0953662, MCB-1026710, CCF-1025177, NGA
HM1582-08-1-0016, and NSFC 60905035, 61035003.
References
[1] E. Berg, M. Schmidt, M. P. Friedlander, and K. Murphy. Group sparsity via linear-time
projection. Tech. Rep. TR-2008-09, Department of Computer Science, University of British
Columbia, Vancouver, July 2008.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] N. Choi, W. Li, and J. Zhu. Variable selection with the strong heredity constraint and its
oracle property. Journal of the American Statistical Association, 105:354-364, 2010.
[4] Z. Dostál. Box constrained quadratic programming with proportioning and projections. SIAM
Journal on Optimization, 7(3):871-887, 1997.
[5] J. Duchi, S. Shalev-Shwartz, Y. Singer, and C. Tushar. Efficient projection onto the ℓ1-ball
for learning in high dimensions. In International Conference on Machine Learning, 2008.
[6] M. Hamada and C. Wu. Analysis of designed experiments with complex aliasing. Journal of
Quality Technology, 24:130-137, 1992.
[7] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In International
Conference on Machine Learning, 2009.
[8] L. Jacob, G. Obozinski, and J. Vert. Group lasso with overlap and graph lasso. In International
Conference on Machine Learning, 2009.
[9] R. Jenatton, J.-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing
norms. Technical report, arXiv:0904.3523v2, 2009.
[10] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical
dictionary learning. In International Conference on Machine Learning, 2010.
[11] S. Kim and E. P. Xing. Tree-guided group lasso for multi-task regression with structured
sparsity. In International Conference on Machine Learning, 2010.
[12] X. Li, N. Sundarsanam, and D. Frey. Regularities in data from factorial experiments. Complexity, 11:32-45, 2006.
[13] J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State
University, 2009.
[14] J. Liu and J. Ye. Efficient Euclidean projections in linear time. In International Conference
on Machine Learning, 2009.
[15] J. Liu and J. Ye. Moreau-Yosida regularization for grouped tree structure learning. In Advances
in Neural Information Processing Systems, 2010.
[16] R. Luss, S. Rosset, and M. Shahar. Decomposing isotonic regression for efficiently solving large
problems. In Advances in Neural Information Processing Systems, 2010.
[17] C. Micchelli, J. Morales, and M. Pontil. A family of penalty functions for structured sparsity.
In Advances in Neural Information Processing Systems 23, pages 1612-1623, 2010.
[18] J. Nelder. The selection of terms in response-surface models-how strong is the weak-heredity
principle? Annals of Applied Statistics, 52:315-318, 1998.
[19] A. Nemirovski. Efficient methods in convex programming. Lecture Notes, 1994.
[20] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.
[21] P. M. Pardalos and G. Xue. Algorithms for a class of isotonic regression problems. Algorithmica,
23:211-222, 1999.
[22] S. Shalev-Shwartz and Y. Singer. Efficient learning of label ranking by soft projections onto
polyhedra. Journal of Machine Learning Research, 7:1567-1599, 2006.
[23] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical
Society, Series B, 58(1):267-288, 1996.
[24] M. Yuan, V. R. Joseph, and H. Zou. Structured variable selection and estimation. Annals of
Applied Statistics, 3:1738-1757, 2009.
[25] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and
hierarchical variable selection. Annals of Statistics, 37(6A):3468-3497, 2009.
[26] L. W. Zhong and J. T. Kwok. Efficient sparse modeling with automatic feature grouping. In
International Conference on Machine Learning, 2011.
3,821 | 446 | Decoding of Neuronal Signals in Visual Pattern Recognition
Emad N Eskandar
Laboratory of Neuropsychology
National Institute of Mental Health
Bethesda MD 20892 USA
Barry J Richmond
Laboratory of Neuropsychology
National Institute of Mental Health
Bethesda MD 20892 USA
John A Hertz
NORDITA
Blegdamsvej 17
DK-2100 Copenhagen Ø, Denmark
Lance M Optican
Laboratory of Sensorimotor Research
National Eye Institute
Bethesda MD 20892 USA
Troels Kjær
NORDITA
Blegdamsvej 17
DK-2100 Copenhagen Ø, Denmark
Abstract
We have investigated the properties of neurons in inferior temporal (IT)
cortex in monkeys performing a pattern matching task. Simple backpropagation networks were trained to discriminate the various stimulus
conditions on the basis of the measured neuronal signal. We also trained
networks to predict the neuronal response waveforms from the spatial patterns of the stimuli. The results indicate that IT neurons convey temporally encoded information about both current and remembered patterns,
as well as about their behavioral context.
1  INTRODUCTION
Anatomical and neurophysiological studies suggest that there is a cortical pathway
specialized for visual object recognition, beginning in the primary visual cortex
and ending in the inferior temporal (IT) cortex (Ungerleider and Mishkin, 1982).
Studies of IT neurons in awake behaving monkeys have found that visually elicited
responses depend on the pattern of the stimulus and on the behavioral context of
the stimulus presentation (Richmond and Sato, 1987; Miller et al, 1991). Until now,
however, no attempt had been made to quantify the temporal pattern of firing in
the context of a behaviorally complex task such as pattern recognition.
Our goal was to examine the information present in IT neurons about visual stimuli
and their behavioral context. We explicitly allowed for the possibility that this
information was encoded in the temporal pattern of the response. To decode the
responses, we used simple feed-forward networks trained by back propagation.
In work reported elsewhere (Eskandar et al, 1991) this information is calculated
another way, with similar results.
2  THE EXPERIMENT
Two monkeys were trained to perform a sequential nonmatch-to-sample task using
a complete set of 32 black-and-white patterns based on 2-D Walsh functions. While
the monkey fixated and grasped a bar, a sample pattern appeared for 352 msecs;
after a pause of 500 msecs a test stimulus appeared for 352 msecs. The monkey
indicated whether the test stimulus failed to match the sample stimulus by releasing
the bar. (If the test matched the stimulus, the monkey waited for a third stimulus,
different from the sample, before releasing the bar; see Fig. 1.)
[Figure 1 appears here: timelines of the match and nonmatch trial types, with sample (352 ms),
inter-stimulus interval (550 ms), test stimuli, inter-trial interval, and reward markers.]
Figure 1: The nonmatch-to-sample task.
The type of trial (match or nonmatch) and the pairings of sample stimuli with
nonmatch stimuli were selected randomly. A single experiment usually contained
several thousand trials; thus each of the 32 patterns appeared repeatedly under the
three conditions (sample, match, and nonmatch). Single neuron recordings from IT
cortex were carried out while the monkeys were performing the task.
[Figure 2 appears here: for two stimulus patterns (rows A and B), firing-probability traces and
single-trial spike rasters under the sample, match, and nonmatch conditions.]

Figure 2: Responses produced by 2 stimuli under 3 behavioural conditions.
Fig. 2 shows the neuronal signals produced by two different stimulus patterns in
the three behavioural conditions: sample, match and nonmatch. The lower parts
of the figure show single-trial spike trains, while the upper parts show the effective
time-dependent firing probabilities, inferred from the spike trains by convolving
each spike with a Gaussian kernel, adding these up for each trial and averaging the
resulting continuous signals over trials. It is evident that for a given stimulus pattern
the average signals produced in different behavioural conditions are different. In
what follows, we proceed further to show that there is information about behavioural
condition in the signal produced in a single trial. We will compute its average value
explicitly.
DECODING NETWORKS
To compute this information we trained networks to decode the measured signal.
The form of the network is shown in Fig. 3.
[Figure 3 appears here: a feed-forward network whose layers are labeled spike trains,
principal components, hidden units, and output.]
Figure 3: Network to decode neuronal signals for information about behavioural
condition.
The first two layers of the network shown preprocess the spike trains as follows: We
begin with the spikes measured in an interval starting 90 msec after the stimulus
onset and lasting 255 msec. First each spike is convolved with a Gaussian kernel
to produce a continuous signal. This signal is sampled at 4-msec intervals, giving a
54-dimensional input vector. In the second step this input vector is compressed by
throwing out all but a small number of its principal components (PC's). The PC
basis was obtained by diagonalizing the 54 × 54 covariance matrix of the inputs
computed over all trials in the experiment. The remaining PC's are then the input
to the rest of the network, which is a standard one with one further hidden layer.
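A minimal numpy sketch of this preprocessing stage is given below. The 90-msec offset,
255-msec window, and 4-msec sampling step follow the text; the kernel width sigma is our
assumption, since the paper does not state it, and the exact grid that yields the paper's
54 samples is not fully specified, so the sampled dimension here may differ slightly.

import numpy as np

def smooth_spike_train(spike_times, t0=0.090, dur=0.255, dt=0.004, sigma=0.010):
    """One spike train (spike times in seconds) -> sampled, Gaussian-smoothed signal."""
    t = t0 + dt * np.arange(int(round(dur / dt)))      # sampling grid after stimulus onset
    s = np.zeros_like(t)
    for ts in spike_times:
        s += np.exp(-0.5 * ((t - ts) / sigma) ** 2)    # one Gaussian bump per spike
    return s

def pc_scores(X, k=5):
    """Project the trial-by-sample matrix X onto its first k principal components."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)                       # covariance matrix of the inputs
    w, V = np.linalg.eigh(C)                           # eigenvalues in ascending order
    return Xc @ V[:, ::-1][:, :k]                      # scores on the top-k PC's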
Earlier work showed that the first five PC's transmit most of the pattern information
in a neuronal response (Richmond et al, 1987). Furthermore, the first PC is highly
correlated with the spike count. Thus, our subsequent analysis was either on the
first PC alone, as a measure of spike count, or on the first five PC's, as a measure
that incorporates temporal modulation.
We trained the networks to make pairwise discriminations between responses
measured under different conditions (sample-match, sample-nonmatch, or match-nonmatch). Thus there is a single output unit, and the target is a 1 or 0 according
to the behavioural condition under which that spike train was measured.
The final two layers of the network were trained by standard backpropagation of
errors for the cross-entropy cost function

    E = − Σ_μ [ T^μ log O^μ + (1 − T^μ) log(1 − O^μ) ],    (1)

where T^μ is the target and O^μ the network output produced by the input vector
x^μ for training example μ. The output of the network with the weights that result
from this training is then the optimal estimate (given the chosen architecture) of
the probability of a behavioural condition, given the measured neuronal signal used
as input. The number of hidden units was adjusted to minimize the generalization
error, which was computed on one quarter of the data that was reserved for this
purpose.
We then calculated the mean equivocation,

    ε = −⟨ O(x) log O(x) + [1 − O(x)] log[1 − O(x)] ⟩_x,    (2)

where O(x) is the value of the output unit for input x and the average is over all
inputs. (We calculated this by averaging over the test or training sets; the results
were not sensitive to which one we chose.) The equivocation is a measure of the
neuron's uncertainty with respect to a given discrimination. From it we can compute
the transmitted information

    I = I_{a priori} − ε = 1 − ε.    (3)

The last equality follows because in our data sets the two conditions always occur
equally often.
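Computed from the trained network's outputs, (2) and (3) are a few lines of numpy;
base-2 logarithms (so information is in bits) are our assumption, as the paper leaves the
base implicit.

import numpy as np

def equivocation(outputs, eps=1e-12):
    """Mean equivocation (2) from sigmoid outputs O(x) in (0, 1)."""
    o = np.clip(np.asarray(outputs, dtype=float), eps, 1.0 - eps)
    return float(np.mean(-(o * np.log2(o) + (1.0 - o) * np.log2(1.0 - o))))

def transmitted_information(outputs):
    """Transmitted information (3): the conditions are equally frequent, so I_a_priori = 1 bit."""
    return 1.0 - equivocation(outputs)

print(transmitted_information([0.9, 0.1, 0.8, 0.35]))   # example outputs from four trials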
It is evident from Fig. 2 that if we already know that our signal is produced by a
particular stimulus pattern, the discrimination of the behavioural condition will be
easier than if we do not possess this a priori knowledge. This is because the signal
varies with stimulus as well as behavioural condition (more strongly, in fact), and
the dependence on the latter has to be sorted out from that on the former. To
get an idea of the effect of this "distraction", we performed 4 separate calculations
for each of the 3 behavioural-condition discriminations, using 1, 4, 8, and all 32
stimulus patterns, respectively.
The results are summarized in Fig. 4, which shows the transmitted information
about the 3 different behavioural-condition discriminations at the various levels of
distraction, averaged over 5 cells. It also indicates how much of the transmitted
information in each case is contained in the spike count alone (i.e. the first PC of
the signal).

[Figure 4 appears here: bar plot of transmitted information (y-axis from 0.1 to 0.5) for the
sample-nonmatch, sample-match, and match-nonmatch discriminations with 1, 4, 8, and 32
patterns.]

Figure 4: Transmitted information for the three behavioural discriminations with
different numbers of patterns. The lower white region on each bar shows the information
transmitted in the first PC alone.

It is apparent that measurable information about behavioural condition is present
in a single neuronal response, even in the total absence of a priori information about
the stimulus pattern. It is also evident that most of this information is contained in
the time-dependence of the firing: the information contained in the first PC of the
signal is significantly less (paired t-test p < 0.001) and was barely out of the noise.
A finite data set can lead to a biased estimate of the transmitted information (Optican et al, 1991). In order to control for this we made a preliminary study of
the dependence of the calculated equivocation on training set size. We varied the
number of trials available to the network in a range (64 - 1024) for one pair of
discriminations (sample vs. nonmatch). The calculated apparent equivocation increased with the sample size N, indicating a small-sample bias. The best correlation
(Pearson r = -0.86) was obtained with a fit of the form:

    ε(N) = ε_∞ − c N^{−1/2}    (c > 0).    (4)

This gives us a systematic way to estimate the small-sample bias and thus provide an
improved estimate ε_∞ of the true equivocation. Details will be reported elsewhere.
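As an illustration, the bias model (4) can be fit with scipy; the equivocation values
below are hypothetical stand-ins, not the paper's measurements.

import numpy as np
from scipy.optimize import curve_fit

def bias_model(N, eps_inf, c):
    # apparent equivocation as a function of training-set size, eq. (4)
    return eps_inf - c / np.sqrt(N)

N = np.array([64.0, 128.0, 256.0, 512.0, 1024.0])        # trial counts from the text
eps_hat = np.array([0.62, 0.66, 0.69, 0.71, 0.72])       # hypothetical measured values
(eps_inf, c), _ = curve_fit(bias_model, N, eps_hat, p0=(0.8, 1.0))
print(f"estimated true equivocation: {eps_inf:.3f}, c = {c:.3f}")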
4  PREDICTING NEURONAL RESPONSES
In a second set of analyses, we examined the neuronal encoding of both current and
recalled patterns. The networks were trained to predict the neuronal response (as
represented by its first 5 PC's) from the spatial pattern of the current nonmatch
stimulus, that of the immediately preceding sample stimulus, or both. The inputs
were the pixel values of the patterns.
The network is shown in Fig. 5. In order to avoid having different architectures for
predictions from one and two input patterns, we always used a number of input units
equal to twice the number of pixels in the input. In the case where the prediction
was to be made on the basis of both previous and current patterns, each pattern
was fed into half the input units. For prediction from just one pattern (either the
current or previous one), the single input pixel array was loaded separately onto
both halves of the input array. As in the previous analyses, the number of hidden
units was fixed by testing on a quarter of the data held out of the training set for
this purpose.
Figure 5: Network for predicting neuronal responses from the stimulus. The inputs
are pixel values of the stimuli (see text), and the targets are the first 5 PC's of the
measured response.
We performed this analysis on data from 6 neurons. Not surprisingly, the predicted
waveforms were better when the input was the current pattern (normalized mean
square error (mse) = 0.482) than when it was the previous pattern (mse = 0.589).
However, the best prediction was obtained when the input reflected both the current
and previous patterns (mse = 0.422). Thus the neurons we analyzed conveyed
information about both remembered and current stimuli.
5  CONCLUSION
The results presented here demonstrate the utility of connectionist networks in analyzing neuronal information processing. We have shown that temporally modulated
responses in IT cortical neurons convey information about both spatial patterns and
behavioral context. The responses also convey information about the patterns of
remembered stimuli. Based on these results, we hypothesize that inferior temporal
neurons play a role in comparing visual patterns with those presented at an earlier
time.
Acknowledgements
This work was supported by NATO through Collaborative Research Grant CRG
900189. EE received support from the Howard Hughes Medical Institute as an NIH
Research Scholar.
References
E N Eskandar et al (1991): Inferior temporal neurons convey information about
stimulus patterns and their behavioral relevance, Soc Neurosci Abstr 17 443; Role
of inferior temporal neurons in visual memory, submitted to J Neurophysiol.
E K Miller et al (1991): A neural mechanism for working and recognition memory
in inferior temporal cortex, Science 253.
L M Optican et al (1991): Unbiased measures of transmitted information and channel capacity from multivariate neuronal data, Biol Cybernetics 65 305-310.
B J Richmond and T Sato (1987): Enhancement of inferior temporal neurons during
visual discrimination, J Neurophysiol 56 1292-1306.
B J Richmond et al (1987): Temporal encoding of two-dimensional patterns by
single units in primate inferior temporal cortex, J Neurophysiol 57 132-178.
L G Ungerleider and M Mishkin (1982): Two cortical visual systems, in Analysis
of Visual Behavior, ed. D J Ingle, M A Goodale and R J W Mansfield, pp 549-586.
Cambridge: MIT Press.
3,822 | 4,460 | Identifying Maximal Cliques that Satisfy Hard Constraints with Application to Deformable Object Model Learning
Xinggang Wang¹*  Xiang Bai¹  Xingwei Yang²†  Wenyu Liu¹  Longin Jan Latecki³
¹ Dept. of Electronics and Information Engineering, Huazhong Univ. of Science and Technology, China
² Image Analytics Lab, GE Research, One Research Circle, Niskayuna, NY 12309, USA
³ Dept. of Computer and Information Sciences, Temple Univ., USA
{wxghust,xiang.bai}@gmail.com, [email protected], [email protected], [email protected]
Abstract
We propose a novel inference framework for finding maximal cliques in a weighted graph that satisfy hard constraints. The constraints specify the graph nodes
that must belong to the solution as well as mutual exclusions of graph nodes, i.e.,
sets of nodes that cannot belong to the same solution. The proposed inference is
based on a novel particle filter algorithm with state permeations. We apply the
inference framework to a challenging problem of learning part-based, deformable
object models. Two core problems in the learning framework, matching of image
patches and finding salient parts, are formulated as two instances of the problem
of finding maximal cliques with hard constraints. Our learning framework yields
discriminative part based object models that achieve very good detection rate, and
outperform other methods on object classes with large deformation.
1  Introduction
The problem of finding maximal cliques in a weighted graph is faced in many applications from
computer vision to social networks. Related work on finding dense subgraphs in weighted graphs
includes [16, 12, 14]. However, these approaches relax the discrete problem of subgraph selection
to a continuous problem. The main drawback of such relaxation is the fact that it is impossible to
enforce that the constraints are satisfied for solutions of the relaxed problem. Therefore, we aim
at solving the discrete subgraph selection problem by employing the recently proposed extension
of particle filter inference to problems with state permeations [20]. There are at least two main
contributions of this paper: (1) We propose an inference framework for solving a maximal clique
problem that cannot be solved with typical clustering methods nor with recent relaxation based
methods [16, 12, 14]. (2) We utilize the inference framework for solving a challenging problem of
learning a part model for deformable object detection.
Object detection is one of the key challenges in computer vision, due to the large intra-class appearance variation of an object class. The appearance variation arises not only from changes in
illumination, viewpoint, color, and other visual properties, but also from nonrigid deformations.
Objects under deformation often exhibit large variation globally. However, their local structures
are somewhat more invariant to the deformations. Based on this observation, we propose a
learning-by-matching framework to match all local image patches across the training images. By
matching, object parts with similar local structure in different training images can be found.
Given a set of training images that contain objects of the same class, e.g., Fig. 1(a), our first problem
is to select a set of image patches that depict the same visual part of these objects. Thus, an object
part is regarded as a collection of image patches e.g., Fig. 1(c). To solve the problem, we divide
each training image into a set of overlapping patches, like the ones shown in Fig. 1(b), and construct
a graph whose nodes represent the patches. The edge weights represent the appearance similarity of
pairs of patches. Since nearby patches in the same image tend to be very similar, we must impose
* This work was done while the author was visiting Temple University.
† This work was done when the author was a graduate student at Temple University.
Figure 1: (a) example training images; (b) patches extracted from the training images; (c) object
parts as collections of patches obtained as maximal cliques of patch similarity graph; (d) the learned
salient parts for giraffe, the patches belong to the same salient part are in the same color. The salient
parts are obtained as maximal cliques in a second graph whose vertices represent the object parts.
a hard constraint that a patch set representing the same object part does not contain two patches
from the same image. This constraint is very important, since otherwise very similar patches from
the same image will dominate this graph. In order to obtain meaningful object parts, we define an
object part as a maximal clique in the weighted graph that satisfies the above constraint. By solving
the problem of maximal clique, we obtain a set of object parts like the ones shown in Fig. 1(c). We
use this set as vertices of a second graph. Finally, we obtain a small set of salient visual parts, e.g.,
Fig. 1(d), by solving a different instance of the maximal clique problem on the second graph.
For each salient visual part, we train a discriminative classifier. By combining these classifiers
with the spatial distribution of the salient object parts, a detector for deformable objects is built. As
illustrated in the experimental results, this detector achieves very good object detection performance,
and outperforms other methods on object classes with large deformation.
The computer vision literature has approached learning of part based object models in different
ways. In [8] objects are modeled as flexible constellations of parts, parts are constrained to a sparse
set of locations determined by an entropy-based feature detector; other part models based on feature
detectors include [15, 17]. Our model is similar to the discriminatively trained part-based model in [6] in
that we train SVM classifiers for each part of object and geometric arrangement of parts is captured
by a set of "springs". However, our learning method is quite different from [6]. In [6] the learning
problem is formalized as a latent SVM, where positions of parts are considered as latent values. The
learning process is an iterative algorithm that alternates between fixing latent values and optimizing
the latent SVM objective function. In contrast, we cast part learning as finding maximal cliques in
a weighted graph of image patches. The edge weights represent appearance similarities of patches.
In [4, 13] multiple instance learning is used to search for the positions of object parts in training
images, and a boosting algorithm is used to select salient parts to represent the object.
2  Maximal Cliques that Satisfy Hard Constraint
A weighted graph G is defined as G = (V, E, e), where V = {v_1, ..., v_n} is the vertex set, n is the
number of vertices, E ⊆ V × V, and e : E → R_{≥0} is the weight function. Vertices in G correspond
to data points, edge weights between different vertices represent the strength of their relationships,
and a self-edge weight reflects the importance of a vertex. As is customary, we represent the graph G
with the corresponding weighted adjacency matrix, more specifically, an n × n symmetric matrix
A = (a_ij), where a_ij = e(v_i, v_j) if (v_i, v_j) ∈ E, and a_ij = 0 otherwise.
Let S = {1, ..., n} be the index set of the vertex set V. For any subset T ⊆ S, G_T denotes the subgraph
of G with vertex set V_T = {v_i, i ∈ T} and edge set E_T = {(v_i, v_j) | (v_i, v_j) ∈ E, i ∈ T, j ∈ T}.
The total weight of the subgraph G_T is defined as f(G_T) = Σ_{i ∈ T, j ∈ T} A(i, j). We can express T by
an indicator vector x = (x_1, ..., x_n) ∈ {0, 1}^n such that x_i = 1 if i ∈ T and x_i = 0 otherwise.
Then f(G_T) can be represented in the quadratic form f(x) = xᵀAx.
We consider mutex relationships between vertices in the graph. Given a subset of vertices M ⊆ S,
we call M a mutex (short for mutual exclusion) if i ∈ M and j ∈ M implies that vertices v_i
and v_j cannot belong to the same maximal clique. Formally, M is a constraint on the indicator
vector x ∈ {0, 1}^n, i.e., if i ∈ M and j ∈ M, then x_i + x_j ≤ 1. A mutex set of graph G is
M = {M_1, ..., M_m | M_i ⊆ S, i = 1, ..., m} such that each M_i is a mutex for i = 1, ..., m.
Given a set T ⊆ S, we define mutex(T) as the set of indices of vertices of G that are incompatible
with T according to M: mutex(T) = {j ∈ S | ∃ M_i ∈ M, ∃ k ∈ T : j, k ∈ M_i}. We consider the following
maximization problem:
    maximize    f(x) = xᵀAx
    subject to  (C1) x = (x_1, ..., x_n) ∈ {0, 1}^n, and
                (C2) ∀ i ∈ U: x_i = 1, and
                (C3) x_i + x_j ≤ 1 if ∃ M_k ∈ M such that i, j ∈ M_k, and
                (C4) Σ_{i=1}^n x_i ≤ K.    (1)
The constraint (C2) specifies a set of vertices U ⊆ S that must be selected as part of the solution,
(C3) ensures that all mutex constraints are satisfied, and (C4) requires that the number of vertices in
the solution be at most K. Of course, we assume the problem (1) is well-defined in that there exists x
that satisfies the four constraints (C1)-(C4).
The goal of (1) is to select a subset of vertices of graph G such that f is maximized and the constraints (C1)-(C4) are satisfied. Since f is the sum of pairwise affinities of the elements of the
selected subset, the larger the subset, the larger the value of f. However, the size of the subset
is limited by the mutex constraints (C3) and the maximal size constraint (C4).
A global maximum of (1) is called a U-M maximal clique of graph G. When both sets U and M
are clear from the context, we simply call the solution a maximal clique.
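To fix the formulation in code, the sketch below evaluates f(x) and checks the hard
constraints (C2)-(C4) for a candidate 0/1 indicator vector; it is only a verification helper,
not the inference algorithm of Section 4, and the names are ours.

import numpy as np

def objective(A, x):
    """f(x) = x^T A x for a 0/1 indicator vector x (problem (1))."""
    return float(x @ A @ x)

def feasible(x, U, mutexes, K):
    """Check constraints (C2)-(C4); x is a 0/1 numpy vector."""
    if any(x[i] != 1 for i in U):                 # (C2): required vertices are selected
        return False
    for M in mutexes:                             # (C3): at most one vertex per mutex
        if x[list(M)].sum() > 1:
            return False
    return x.sum() <= K                           # (C4): size budget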
The problem (1) is a combinatorial optimization problem, and hence it is NP-hard [2]. As is the case
for similar problems of finding dense subgraphs, the constraint (C1) is usually relaxed to x ∈ [0, 1]^n,
i.e., each coordinate of x is relaxed to a continuous variable in the interval [0, 1], e.g., [16, 12, 14].
However, it is then difficult if not impossible to ensure that constraints (C2), (C3) and (C4) are satisfied.
Another difficulty is related to discretization of the relaxed solution in order to obtain a solution
that satisfies (C1). For these reasons, and since for our application it is very important that the
constraints are satisfied, we treat (C1)-(C4) as hard constraints that cannot be violated. We propose
an efficient method for directly solving (1) in Section 4. We first present two instances of problem
(1) in Section 3, where we describe the proposed application to learning salient object parts.
3  Learning by Matching
In this section, we present a novel framework to learn a part-based object model through matching.
The core problems of learning a part-based object model are how to find the right locations of an
object part in all training images and how to select salient parts for representing the object. In our
framework, these two problems are formulated as two instances of finding maximal cliques with hard
constraints.
3.1  Matching Image Patches
Given a batch of training images I = {I_1, ..., I_K} showing objects from a given class, e.g., Fig. 1
(a), where K is the total number of training images, we densely extract overlapping image patches
from every training image. We denote the set of patches extracted from all images as {P_1, ..., P_n},
where n is the total number of patches. Each patch is described as P_i = {F_i, L_i, X_i, Y_i} for i ∈
[1, ..., n], where F_i is the appearance descriptor of P_i (we use the descriptor from [19]), L_i is the
image label of P_i (e.g., if P_i is extracted from the 5th training image, L_i = 5), and X_i and Y_i indicate
the position of P_i in its image. All the training images are normalized to the same size.
We treat all the patches as the set of vertices of graph G, i.e., V = {P_1, ..., P_n}. The affinity
relation between the patches, i.e., the graph edge weights, is defined as a_ij = F_i · F_j if i ≠ j, and
a_ij = 0 otherwise, where F_i · F_j is the dot product of the two feature vectors, which are normalized. It
measures the appearance similarity of patches P_i and P_j. In addition, if the distance between patch
positions (X_i, Y_i) and (X_j, Y_j) is larger than 0.2 of the mean of all bounding box heights, we set
a_ij = 0. This ensures that the matrix A is sparse.
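A direct numpy rendering of this construction is sketched below; descriptor extraction
itself is outside its scope, the 0.2 height factor follows the text, and the function name is ours.

import numpy as np

def build_patch_graph(F, L, X, Y, mean_box_height, K):
    """Affinity matrix A and the K per-image mutexes from patch data.
    F: (n, d) array of normalized descriptors; L: image labels in 1..K; X, Y: positions."""
    A = F @ F.T                                   # a_ij = F_i . F_j
    np.fill_diagonal(A, 0.0)                      # a_ii = 0
    dist = np.hypot(X[:, None] - X[None, :], Y[:, None] - Y[None, :])
    A[dist > 0.2 * mean_box_height] = 0.0         # sparsify far-apart patch pairs
    mutexes = [np.flatnonzero(L == j) for j in range(1, K + 1)]
    return A, mutexes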
3
We have exactly K mutex constraints M = {M1 , . . . , MK }, where Mj contains all patches from
image Ij , i.e., Mj = {Pi ? V |Li = j}, j ? [1, . . . , K]. This means that we do not want two
patches from the same image to belong to the same maximal clique.
Suppose that the first r patches P_1, ..., P_r are in the 1st training image, i.e., L_i = 1 if and only if i = 1, ..., r. The part learning algorithm based on finding maximal cliques is given in Alg. 1.
Algorithm 1 Part learning by finding maximal cliques with hard constraints
Input: A, M, K, and r.
for i = 1 → r do
  1. Set U = {i}.
  2. Solve problem (1), get the solution x*, and its value W(i) = f(x*) = x*^T A x*.
  3. Set the solution patches as Q(i) = {P_j | x*_j = 1}.
end for
Output: Parts Q = {Q(1), ..., Q(r)} and their matching weights W = {W(1), ..., W(r)}.
We recall that each learned part Q(i) is defined as a set of K patches, e.g., Fig. 1(c). Due to our mutex constraints, each Q(i) contains exactly one patch from each of the K training images. We treat the learned parts as candidate object parts, because there are non-object areas inside the bounding box images. Each value W(i) represents a matching score of Q(i).
3.2 Selecting Salient Parts for Part Based Object Representation
In order to select a set of object parts that best represents the object class, our strategy is to find a subset of Q that maximizes the sum of the matching scores. We formulate this problem as finding a maximal clique with hard constraints again. We define a new graph H with vertices V = Q and adjacency matrix B = (b_ij), where b_ij = W(i) if i = j, and b_ij = 0 otherwise. Thus, the matrix of graph H has nonzero entries only on the diagonal. It may appear that the problem is trivial, since there are no edges between different vertices of H, but this is not the case due to the mutex relations.
The mutex set M^H = {M_1^H, ..., M_r^H} is defined as M_i^H = {j | D(i, j) ≤ δ} for i, j ∈ [1, ..., r], where δ is a distance threshold and D(i, j) is the average distance between patches in Q(i) and Q(j) that belong to the same image. If Q(i) is selected as a salient part, the mutex M_i^H ensures that the patches of other salient parts are not too close to the patches of Q(i). For example, Q(1) and Q(2) in Fig. 1(c) both have good matching weights, but the average distance between Q(1) and Q(2) is smaller than δ, so they cannot be selected as salient parts at the same time.
As initialization (C2), we set U^H to a one-element set containing arg max_i W(i), so the part with the maximal matching score is always selected as a salient part. We set K in (C4) to K^H, where K^H is the maximal number of salient parts. K^H = 6 in all our experiments.
By solving the second instance of problem (1) for B, U^H, M^H, K^H, we obtain the set of salient parts as the solution x*. We denote it as SP = {Q(j) | x*(j) = 1}.
4 Particle Filter Inference for U ? M Maximal Clique
By associating a random variable (RV) X_i with each vertex i ∈ S of graph G, we introduce a Gibbs random field (GRF) with the neighborhood structure of graph G. Each RV can be assigned either 1 or 0, where X_i = 1 means that the vertex v_i is selected as part of the solution. The probability of the assignment of values to all RVs is defined as

P(X_1 = x_1, ..., X_n = x_n) = p(x) ∝ exp(f(x)/τ) = exp(x^T A x / τ),   (2)

where we recall that x = (x_1, ..., x_n) ∈ {0, 1}^n and τ > 0. We observe that the definition in (2) also applies to a subset of RVs, i.e., we can use it to compute P(X_{i_1} = x_{i_1}, ..., X_{i_k} = x_{i_k}) = p(x_{i_1}, ..., x_{i_k}) ∝ exp(f(x_{i_1}, ..., x_{i_k})/τ) for k < n. This is equivalent to setting the other coordinates in the indicator vector x to zero.
Since exp is a monotonically increasing function, the maximum of (2) is obtained at the same point as the maximum of f in (1). We propose to utilize the Particle Filter (PF) framework to maximize (2) subject to the constraints in (1). The goal of PF is to approximate p(x) with a set of weighted samples {x^(i), w(x^(i))}_{i=1}^N drawn from some proposal distribution q. Under reasonable assumptions on p(x) this approximation is possible with any precision if N is sufficiently large [3].
Since it is still computationally intractable to draw samples from q due to the high dimensionality of x, PF utilizes Sequential Importance Sampling (SIS). In the classical PF approaches, samples are generated recursively following the order of the RVs according to x_t^(i) ~ q(x_t | x_{1:t−1}) for t = 1, ..., n, and the particles are built sequentially x_{1:t}^(i) = (x_{1:t−1}^(i), x_t^(i)) for i = 1, ..., N. The subscript t in x_t^(i) and in q(x_t | x_{1:t−1}) indicates from which RV the samples are generated. We use x_{1:t}^(i) as a shorthand notation for (x_1^(i), ..., x_t^(i)). When t = m we obtain that x_{1:m}^(i) ~ q(x_{1:m}). In other words, by sampling x_t^(i) recursively from the proposal distribution q(x_t | x_{1:t−1}) of the RV with index t, we obtain a sample from q(x_{1:m}) at t = m. As is common in PF applications, we set q(x_t | x_{1:t−1}) = p(x_t | x_{1:t−1}), i.e., the proposal distribution is set to the conditional distribution of p.
We observe that the order of sampling follows the indexing of RVs with the index set S. However, there is no natural order of RVs on a GRF, and the order of RV indices in S does not have any particular meaning, in that this order is not related in any way to our objective function f. The classical PF framework has been developed for sequential state estimation like tracking or robot localization [5], where observations arrive sequentially and, consequently, determine a natural order of the RVs representing the states, like locations. In a recent work [20], the PF framework has been extended to work with an unordered set of RVs for solving image jigsaw puzzles. Inspired by this work, we extend the PF framework to solve the U ? M maximal clique problem in a weighted graph. Unlike tracking a moving object, in our problem the observations are known from the beginning and are given by the affinity matrix A.
The key idea of [20] is to explore different orders of the states (x_{i_1}, ..., x_{i_n}) as opposed to utilizing the fixed order of the states x = (x_1, ..., x_n) determined by the index of RVs as in the standard PF. (States are assigned values of RVs.) To achieve this, the first step of the PF algorithm is modified so that the importance sampling is performed for every RV not yet represented by the current particle. To formally define the sampling rule, we need to explicitly represent different orders of states with an index selection function σ : {1, ..., t} → {1, ..., n} for 1 < t ≤ n, which is one-to-one. In particular, when t = n, σ is a permutation. We use the shorthand notation σ(1:t) to denote (σ(1), σ(2), ..., σ(t)) for t ≤ n, and similarly, x_{σ(1:t)} = (x_{σ(1)}, x_{σ(2)}, ..., x_{σ(t)}). Each particle x_{σ(1:t)}^(i) can now have a different permutation σ^(i) representing the indices of RVs with assigned values. Thus, a sequence of RVs visited before time t is described by a subsequence (i_1, ..., i_t) of t different numbers in S = {1, ..., n}.
We define an index set of indices of graph vertices that are compatible with the selected vertices in σ^(i)(1:t) as Γ(σ^(i)(1:t)) = S \ (σ^(i)(1:t) ∪ mutex(σ^(i)(1:t))). Hence Γ(σ^(i)(1:t)) contains the indices from S that are neither present in σ^(i)(1:t) nor in a mutex relation with the members of σ^(i)(1:t).
We are now ready to formulate the proposed importance sampling. At each iteration t ≤ n, for each particle (i) and for each s ∈ Γ(σ^(i)(1:t−1)), we sample x_s^(i) ~ p(x_s | x_{σ(1:t−1)}^(i)). The subscript s at the conditional pdf p indicates that we sample values for the RV with index s. We generate at least one sample for each s ∈ Γ(σ^(i)(1:t−1)). This means that the single particle x_{σ(1:t−1)}^(i) is multiplied and extended to several follower particles x_{σ(1:t−1),s}^(i).
Based on (2), it is easy to derive a formula for the proposal function:

p(x_s | x_{σ(1:t−1)}) = p(x_{σ(1:t−1)}, x_s) / p(x_{σ(1:t−1)}) = exp(f(x_{σ(1:t−1)}, x_s)/τ) / exp(f(x_{σ(1:t−1)})/τ) = exp((f(x_{σ(1:t−1)}, x_s) − f(x_{σ(1:t−1)})) / τ).   (3)
We observe that f(x_s, x_{σ(1:t−1)}) − f(x_{σ(1:t−1)}) = x_s^T A x_s + 2 x_s^T A x_{σ(1:t−1)} is the gain in the target function f obtained after assigning the value to RV X_s. Since we are interested in making this gain as large as possible, and assigning x_s = 0 leads to zero gain, we focus only on assigning x_s = 1. Consequently, the pdf in (3) can be treated as a probability mass function (pmf) over s ∈ Γ(σ^(i)(1:t−1)), and sampling from it becomes equivalent to sampling

s^(i) ~ p(s | σ^(i)(1:t−1)) = p(x_s = 1 | x_{σ(1:t−1)}^(i)).   (4)
Hence, we can interpret a particle x_{σ(1:t−1)}^(i) as a sequence of indices of selected graph vertices σ^(i)(1:t−1), since x_{σ(1:t−1)}^(i) is a vector of ones assigned to the RVs with indices in σ^(i)(1:t−1). In other words, it holds that ind(x_{σ(1:t−1)}^(i)) = σ^(i)(1:t−1), where ind : {0, 1}^n → 2^S is a function that assigns to x the set of indices of coordinates of x that are equal to one. For example, if x = (0, 1, 1, 0, 0) ∈ {0, 1}^5, then ind(x) = {2, 3}, which means that graph vertices with indices 2 and 3 are selected by x.
In order to construct the pmf in (4), we only need to assign the probabilities to all indices s ∈ Γ(σ^(i)(1:t−1)) according to the definition in (3). Then s^(i) is sampled from the discrete pmf constructed this way. Now we are ready to summarize the proposed PF framework in Algorithm 2.
Algorithm 2 Particle Filter Algorithm for U ? M Maximal Clique
Input: A, U, M, K, N, τ.
Initialize: t = 1, initialize every particle (i) with σ_1^(i) = U for i = 1, ..., N.
while Γ(σ^(1)(1:t−1)) ∪ ... ∪ Γ(σ^(N)(1:t−1)) ≠ ∅ and t ≤ K do
  for i = 1 → N do
    if Γ(σ^(i)(1:t−1)) ≠ ∅ then
      1. Importance sampling / proposal: Sample followers x_s^(i) of particle (i) from
         x_s^(i) ~ p(x_s | x_{σ(1:t−1)}^(i)) = exp((f(x_s^(i), x_{σ(1:t−1)}^(i)) − f(x_{σ(1:t−1)}^(i))) / τ)
         and set x_{σ(1:t)}^(i,s) = (x_{σ(1:t−1)}^(i), x_s^(i)) and σ^(i,s)(t) = s, i.e., σ^(i,s)(1:t) = (σ(1:t−1), s).
      2. Importance weighting / evaluation: An individual importance weight is assigned to each follower particle according to
         w(x_{σ(1:t)}^(i,s)) = exp(f(x_s^(i), x_{σ(1:t−1)}^(i)) / τ)
    else
      we carry over the particle: x_{σ(1:t)}^(i,s) = x_{σ(1:t−1)}^(i) and w(x_{σ(1:t)}^(i,s)) = w(x_{σ(1:t−1)}^(i)).
    end if
  end for
  3. Resampling: Sample with replacement N new particle filters from {x_{σ(1:t)}^(1,s), ..., x_{σ(1:t)}^(N,s)} according to the weights, and assign the sampled set to {x_{σ(1:t)}^(1), ..., x_{σ(1:t)}^(N)}; set t ← t + 1.
end while
Output: {x_{σ(1:t)}^(1), ..., x_{σ(1:t)}^(N)}
We take the particle with the maximal value of f as the solution of (2), or equivalently, as the solution of (1): x* = x_{σ(1:t)}^(k), where k = arg max_i f(x_{σ(1:t)}^(i)). As proven in [20], x* approximates max_x p(x) with any precision for a sufficiently large number of particles N.
5 Object Detection with the Deformable Part Model
In Section 3.2, we found K^H salient parts, denoted as SP = {Q_i | i = 1, ..., K^H}, to represent an object class; each part Q_i contains K image patches, one patch from each training image. Now we describe the object model constructed from SP.
We train a linear SVM classifier for each part Q_i, which we denote as SVM(Q_i). To train the linear SVM classifier SVM(Q_i), the positive examples are the patches of Q_i. The negative examples are
obtained by an iterative procedure described in [10]. The initial training set consists of randomly
chosen background windows and objects from other classes. The resulting classifier is used to scan
images and select the top false positives as hard examples. These hard examples are added to the
negative set and a new classifier is learned. This procedure is repeated several times to obtain the
final classifier.
As in [6], we capture the spatial distribution of the salient parts in SP with a star model, where the location of each part is expressed as an offset vector with respect to the model center. The offset is learned from the offsets of the patches in Q_i to the centers of the training images (bounding boxes) containing them.
In order to be able to directly compare to Latent SVM [6], we use the same object detection framework. Thus, the detection is performed in the sliding-window fashion followed by non-maxima suppression. However, we do not use the root filter, which is an appearance classifier of the whole detection window. Thus, our detection is purely part based.
6 Experimental Evaluation
We validate our method on two datasets with deformable objects: the ETHZ Giraffes dataset [9] and the TUD-Pedestrians dataset [1]. For the ETHZ Giraffes dataset, we follow the train/test split described in [18]: the first 43 giraffe images are positive training examples. The remaining 44 giraffe images in the ETHZ dataset are used for testing as positive images. We also select 43 images from other categories as negative training images. As negative test images we take all remaining images from the other categories. Thus, we have a total of 86 training images and a total of 169 test images. For learning the salient parts, the giraffe bounding boxes are normalized to an area of 3000 pixels with the aspect ratio kept.
For the TUD-Pedestrians dataset, we use the provided 400 images for training and 250 images for testing. The background of the training images is used to extract negative examples. The training pedestrian bounding boxes are normalized to a height of 200 pixels with the aspect ratio kept.
For both datasets, the size of each patch is 61 × 61 pixels, and the number of patches per image is about 1000. We set K^H in (C4) to 6, meaning that our goal is to learn 6 salient parts for each object class. The number of salient parts was determined experimentally. The minimal distance δ between salient parts is 60 pixels for the giraffe class and 45 pixels for the pedestrian class. In Algorithm 2, the normalization parameter τ is set to the median value in A times the size of the expected maximal clique times 2, the number of particles is N = 500, and for each particle we sample 10 followers. In order to compare to [6], we used the released latent SVM code [7] on the same training and testing images as for our approach.
6.1 Detection Performance
We plot precision/recall (PR) curves to show the detection performance of the latent SVM method [6] and our method on both test datasets in Fig. 2. On the ETHZ giraffe class our average precision (AP) is 0.841, which is much better than the AP of the latent SVM, 0.610. Our result significantly outperforms the currently best reported result in [18], which has an AP of 0.787. On the TUD-Pedestrians dataset, our AP of 0.862 is comparable to the latent SVM, whose AP is 0.875. These results show that our method can learn object models that yield very good detection
performance. Our method is particularly suitable for learning part models of objects with large deformations like giraffes. The significant nonrigid deformation of giraffes leads to a large variation in the position of patches representing the same object part. Since latent SVM learning is based on incremental improvement in the position of parts, it seems to be unable to deal with large variations of part positions. In contrast, this does not influence the performance of our method, since it is matching based. Because the variance in the part positions in the TUD-Pedestrians dataset is smaller than in giraffes, the performance of both methods becomes comparable. Some of our detection results are shown in Fig. 3. They demonstrate that our learned part model leads to detection performance that is robust to scale changes, appearance variance, part location variance, and substantial occlusion.
Figure 2: Precision/recall curves for Latent SVM method (red) and our method (blue) on ETHZ
Giraffe dataset (left) and TUD-Pedestrian dataset (right).
Figure 3: Some of our detection results for giraffe class and pedestrian dataset. The detected patches
with the same color belong to the same salient part. The part colors are the same as in Fig. 4.
Detected bounding boxes are shown in blue.
6.2 Tree Structure of Salient Parts
In our framework, it is also possible to learn a tree structure of the salient parts. Given the set of learned salient parts SP = {Q_i | i = 1, ..., K^H} as vertices, we construct a new graph, called the Salient Part Graph (SPG). The edge weights of the SPG are given by the average distance between pairs of salient parts Q_i and Q_j, given by D(i, j) for i, j = 1, ..., K^H.
Figure 4: The learned salient parts and graph structures for the giraffe class and pedestrian dataset.
The patches that belong to the same salient part are in the same color.
We obtain a minimum spanning tree of the SPG using Kruskal's algorithm [11]. The learned trees for the two object classes of giraffes and pedestrians are illustrated in Fig. 4. Their connections yield a salient part structure in accord with our intuition. We did not utilize this tree structure for object detection. Instead we used the star model in our detection results in order to have a fair comparison to [6].
7 Conclusions
An object part is defined as a set of image patches. Learning object parts is formulated as two
instances of the problem of finding maximal cliques in weighted graphs that satisfy hard constraints,
and solved with the proposed Particle Filter inference framework. By utilizing the spatial relation of
the obtained salient parts, we are also able to learn a tree structure of the deformable object model.
The application of the proposed inference framework is not limited to learning object part models. There exist many other applications where it is important to enforce hard constraints, such as common pattern discovery and constrained matching problems.
Acknowledgement: The work was supported by the NSF under Grants IIS-0812118, BCS-0924164,
OIA-1027897, by the AFOSR Grant FA9550-09-1-0207, and by the National Natural Science Foundation of China (NSFC) Grants 60903096, 61173120 and 60873127.
References
[1] M. Andriluka, S. Roth, and B. Schiele. People-tracking-by-detection and people-detection-by-tracking.
IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2008.
[2] Y. Asahiro, R. Hassin, and K. Iwama. Complexity of finding dense subgraphs. Discrete Applied Mathematics, 121:15–26, 2002.
[3] D. Crisan and A. Doucet. A survey of convergence results on particle filtering methods for practitioners.
IEEE Transactions on Signal Processing, 50(3):736–746, 2002.
[4] P. Dollar, B. Babenko, S. Belongie, P. Perona, and Z. Tu. Multiple component learning for object detection. ECCV, 2008.
[5] A. Eliazar and P. Ronald. Hierarchical linear/constant time slam using particle filters for dense maps. In
Advances in Neural Information Processing Systems 18, pages 339–346. 2006.
[6] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively
trained part based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No.
9, 2010.
[7] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Discriminatively trained deformable part models,
release 4. http://people.cs.uchicago.edu/~pff/latent-release4/.
[8] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning.
Proc. of the IEEE Conf on Computer Vision and Pattern Recognition, 2003.
[9] V. Ferrari, T. Tuytelaars, and L. V. Gool. Object detection by contour segment networks. ECCV, 2006.
[10] H. Harzallah, F. Jurie, and C. Schmid. Combining efficient object localization and image classification.
In International Conference on Computer Vision, 2009.
[11] J. B. Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. In
Proceedings of the American Mathematical Society, 1956.
[12] M. Leordeanu, M. Hebert, and R. Sukthankar. An integer projected fixed point method for graph matching
and map inference. In Neural Info. Proc. Systems (NIPS), 2009.
[13] Z. Lin, G. Hua, and L. S. Davis. Multiple instance feature for robust part-based object detection. IEEE
Conference on Computer Vision and Pattern Recognition, 2009.
[14] H. Liu, L. J. Latecki, and S. Yan. Robust clustering as ensemble of affinity relations. In Neural Info. Proc.
Systems (NIPS), 2010.
[15] N. Loeff, H. Arora, A. Sorokin, and D. Forsyth. Efficient unsupervised learning for localization and
detection in object categories. In Advances in Neural Information Processing Systems 18, pages 811–818.
2006.
[16] M. Pavan and M. Pelillo. Dominant sets and pairwise clustering. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 29:167-172, 2007.
[17] A. Quattoni, M. Collins, and T. Darrell. Conditional random fields for object recognition. In Advances in
Neural Information Processing Systems 17, pages 1097–1104. 2005.
[18] P. Srinivasan, Q. Zhu, and J. Shi. Many-to-one contour matching for describing and discriminating object
shape. IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[19] X. Wang, X. Bai, W. Liu, and L. J. Latecki. Feature context for image classification and object detection.
IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2011.
[20] X. Yang, N. Adluru, and L. J. Latecki. Particle filter with state permutations for solving image jigsaw
puzzles. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2011.
Transfer Learning by Borrowing Examples
for Multiclass Object Detection
Joseph J. Lim
CSAIL, MIT
[email protected]
Ruslan Salakhutdinov
Department of Statistics, University of Toronto
[email protected]
Antonio Torralba
CSAIL, MIT
[email protected]
Abstract
Despite the recent trend of increasingly large datasets for object detection, there
still exist many classes with few training examples. To overcome this lack of training data for certain classes, we propose a novel way of augmenting the training
data for each class by borrowing and transforming examples from other classes.
Our model learns which training instances from other classes to borrow and how
to transform the borrowed examples so that they become more similar to instances
from the target class. Our experimental results demonstrate that our new object
detector, with borrowed and transformed examples, improves upon the current
state-of-the-art detector on the challenging SUN09 object detection dataset.
1 Introduction
Consider building a sofa detector using a database of annotated images containing sofas and many
other classes, as shown in Figure 1. One possibility would be to train the sofa detector using only
the sofa instances. However, this would result in somewhat poor performance due to the limited
size of the training set. An alternative is to build priors about the appearance of object categories
and share information among object models of different classes. In most previous work, transfer of
information between models takes place by imposing some regularization across model parameters.
This is the standard approach both in the discriminative setting [1, 2, 3, 4, 5, 6, 7, 8] and in generative
object models [9, 10, 11, 12, 13, 14].
In this paper, we propose a different approach to transfer information across object categories. Instead of building object models in which we enforce regularization across the model parameters,
we propose to directly share training examples from similar categories. In the example from Figure 1, we can try to use training examples from other classes that are similar enough, for instance
armchairs. We could just add all the armchair examples to the sofa training set. However, not all
instances of armchairs will look close enough to sofa examples to train an effective detector. Therefore, we propose a mechanism to select, among all training examples from other classes, which ones
are closer to the sofa class. We can increase the number of instances that we can borrow by applying
various transformations (e.g., stretching armchair instances horizontally to look closer to sofas). The
transformations will also depend on the viewpoint. For instance, a frontal view of an armchair looks
like a compressed sofa, whereas the side view of an armchair and a sofa often look indistinguishable.
Our approach differs from generating new examples by perturbing examples (e.g., adding mirrored
or rotated versions) from its own class [15]. Rather, these techniques can be combined with ours.
Our approach looks for the set of classes to borrow from, which samples to borrow, and what the best
transformation for each example is. Our work has similarities with three pieces of work on transfer
Figure 1: An illustration of training a sofa detector by borrowing examples from other related classes. Our model can find (1) good examples to borrow, by learning a weight for each example, and (2) the best transformation for each training example in order to increase the borrowing flexibility. Transformed examples in the blue (or red) box are more similar to the sofa's frontal (or side) view. Transformed examples, which are selected according to their learned weights, are trained for sofa together with the original sofa examples. (X on images indicates that they have low weights to be borrowed.)
learning for object recognition. Miller et al. [9] propose a generative model for digits that shares
transformations across classes. The generative model decomposes each model into an appearance
model and a distribution over transformations that can be applied to the visual appearance to generate
new samples. The set of transformations is shared across classes. In their work, the transfer of
information is achieved by sharing parameters across the generative models and not by reusing
training examples. The work by Fergus et al. [16] achieves transfer across classes by learning a
regression from features to labels. Training examples from classes similar to the target class are
assigned labels between +1 and ?1. This is similar to borrowing training examples but relaxing the
confidence of the classification score for the borrowed examples. Wang et al. [17] assign rankings to
similar examples, by enforcing the highest and lowest rankings for the original positive and negative
examples, respectively, and requiring borrowed examples be somewhere in between. Both of these
works rely on a pre-defined similarity metric (e.g. WordNet or aspect based similarity) for deciding
which classes to share with. Our method, on the other hand, learns which classes to borrow from as
well as which examples to borrow within those classes as part of the model learning process.
Borrowing training examples becomes effective when many categories are available. When there
are few and distinct object classes, as in the PASCAL dataset [18], the improvement may be limited.
However, a number of other efforts are under way for building large annotated image databases
with many categories [19, 20, 21]. As the number of classes grows, the number of sets of classes
with similar visual appearances (e.g., the set of truck, car, van, suv, or chair, armchair, swivel chair,
sofa) will increase, and the effectiveness of our approach will grow as well. In our experiments,
we show that borrowing training examples from other classes results in improved performance upon
the current state of the art detectors trained on a single class. In addition, we also show that our
technique can be used in a different but related task. In some cases, we are interested in merging
multiple datasets in order to improve the performance on a particular test set. We show that learning
examples to merge results in better performance than simply combining the two datasets.
2 Learning to Borrow Examples
Consider the challenging problem of detecting and localizing objects from a wide variety of categories such as cars, chairs, and trees. Many current state-of-the-art object detection (and object
recognition) systems use rather elaborate models, based on separate appearance and shape components, that can cope with changes in viewpoint, illumination, shape and other visual properties.
However, many of these systems [22, 23] detect objects by testing sub-windows and scoring corresponding image patches x with a linear function of the form: y = β^T φ(x), where φ(x) represents a vector of different image features, and β represents a vector of model parameters.
In this work, we focus on training detection systems for multiple object classes. Our goal is to
develop a novel framework that enables borrowing examples from related classes for a generic object
detector, making minimal assumptions about the type of classifier, or image features used.
2.1 Loss Function for Borrowing Examples
Consider a classification problem where we observe a dataset D = {x_i, y_i}_{i=1}^n of n labeled training examples. Each example belongs to one of C classes (e.g. 100 object classes), and each class c ∈ C = {1, ..., C} contains a set of n_c labeled examples. We let x_i ∈ R^D denote the input feature vector of length D for training case i, and y_i be its corresponding class label. Suppose that we are also given a separate background class, containing b examples. We further assume a binary representation for class labels¹, i.e. y_i ∈ C ∪ {−1}, indicating whether a training example i belongs to one of the given C classes, or the "negative" background class².
For a standard binary classification problem, a commonly used approach is to minimize:

min_{β^c} ( Σ_{i=1}^{n_c+b} Loss(β^c · x_i, sign(y_i)) + λR(β^c) ),   (1)

where i ranges over the positive and negative examples of the target class c; β^c ∈ R^D is the vector of unknown parameters, or regression coefficients, for class c; Loss(·) is the associated loss function; and R(·) is a regularization function for β.
Now, consider learning which other training examples from the entire dataset D our target class c could borrow. The key idea is to learn a vector of weights w^c of length n + b, such that each w_i^c would represent a soft indicator of how much class c borrows from the training example x_i. Soft indicator variables w_i^c will range between 0 and 1, with 0 indicating borrowing none and 1 indicating borrowing the entire example as an additional training instance of class c. All true positive examples belonging to class c, with y_i = c, and all true negative examples belonging to the background class, with y_i = −1, will have w_i^c = 1, as they will be used fully. Remaining training examples will have w_i^c between 0 and 1. Our proposed regularization model takes the following form:

min_{β^c} min_{w^{∗,c}} Σ_{c∈C} ( Σ_{i=1}^{n+b} (1 − w_i^{∗,c}) Loss(β^c · x_i, sign(y_i)) + λR(β^c) + Ω_{λ1,λ2}(w^{∗,c}) ),   (2)

subject to w_i^c = 1 for y_i = −1 or c, and 0 ≤ w_i^c ≤ 1 for all other i, where we defined³ w∗ = 1 − w, and where i ranges over all training examples in the dataset. We further define Ω(w∗) as:
Ω_{λ1,λ2}(w∗) = λ1 Σ_{l∈C} √n_l ‖w∗_(l)‖_2 + λ2 ‖w∗‖_1,   (3)

where w∗_(l) represents the vector of weights for class l, with w∗_(l) = (w∗_{j_1}, w∗_{j_2}, ..., w∗_{j_{n_l}}) for y_{j_m} = l. Here, Ω(·) regularizes w^{∗,c} using a sparse group lasso criterion [24]. Its first term can be viewed as an intermediate between the L1- and L2-type penalty. A pleasing property of L1-L2 regularization is that it performs variable selection at the group level. The second term of Ω(·) is an L1-norm, which keeps the sparsity of weights at the individual level.
The overall objective of Eq (2) and its corresponding regularizer Ω(·) have an intuitive interpretation. The regularization term encourages borrowing all examples as new training instances for the target class c. Indeed, setting the corresponding regularization parameters λ1 and λ2 to high enough values (i.e. forcing w to be an all-ones vector) would amount to borrowing all examples, which would result in learning a "generic" object detector. On the other hand, setting λ1 = λ2 = 0 would recover the original standard objective of Eq (1), without borrowing any examples. Figure 2b displays the learned w_i for 6547 instances to be borrowed by the truck class. Observe that classes that have similar visual appearances to the target truck class (e.g. van, bus) have w_i close to 1 and are grouped together (compare with Figure 2a, which only uses an L1 norm).
¹ This is a standard "1 vs. all" classification setting.
² When learning a model for class c, all other classes can be considered as "negative" examples. In this work, for clarity of presentation, we will simply assume that we are given a separate background class.
³ For clarity of presentation, throughout the rest of the paper, we will use the following identity w∗ = 1 − w.
Figure 2: Learning to borrow for the target truck class: learned weights w^truck for 6547 instances using (a) L1-norm; (b) Ω(·) regularization; and (c) Ω(·) with the symmetric borrowing constraint.
We would also like to point out an analogy between our model and various other transfer learning models that regularize the β parameter space [25, 26]. The general form applied to our problem setting is the following:

min_{β^c} Σ_{c∈C} ( Σ_i Loss(β^c · x_i, sign(y_i)) + λR(β^c) + γ ‖β^c − (1/C) Σ_{k=1}^C β^k‖_2^2 ).   (4)
The model in Eq (4) regularizes all β^c to be close to a single mode, (1/C) Σ_k β^k. This can be further generalized so that β^c is regularized toward one of many modes, or "super-categories", as pursued in [27]. Contrary to previous work, our model from Eq (2) regularizes the weights on all training examples, rather than parameters, across all categories. This allows us to directly learn both which examples and what categories we should borrow from. We also note that model performance could potentially be improved by introducing additional regularization across model parameters.

2.2 Learning
Solving our final optimization problem, Eq (2), for w and β jointly is a non-convex problem. We therefore resort to an iterative algorithm based on the fact that solving for β given w and for w given β are convex problems. The algorithm will iterate between (1) solving for β given w based on [22], and (2) solving for w given β using the block coordinate descent algorithm [28], until convergence. We initialize the model by setting w_i^c to 1 for y_i = c and y_i = −1, and to 0 for all other training examples. Given this initialization, the first iteration is equivalent to solving C separate binary classification problems of Eq (1), when there is no borrowing.⁴
Even though most irrelevant examples have low borrowing indicator weights w_i, it is ideal to clean up these noisy examples. To this end, we introduce a symmetric borrowing constraint: if the car class does not borrow examples from the chair class, then we would also like the chair class not to borrow examples from the corresponding car class. To accomplish this, we multiply w_i^c by H(w̄_{y_i}^c − ε), where H(·) is the Heaviside step function. We note that w_i^c refers to the weight of example x_i to be borrowed by the target class c, whereas w̄_{y_i}^c refers to the average weight of the examples that class y_i borrows from the target class c. In other words, if the examples that class y_i borrows from class c have low weights on average (i.e. w̄_{y_i}^c < ε), then class c will not borrow example x_i, as this indicates that classes c and y_i may not be similar enough. The resulting weights after introducing this symmetric relationship are shown in Figure 2c.
3 Borrowing Transformed Examples
So far, we have assumed that each training example is borrowed as is. Here, we describe how we
apply transformations to the candidate examples during the training phase. This will allow us to
borrow from a much richer set of categories such as sofa-armchair, cushion-pillow, and car-van.
There are three different transformations we employ: translation, scaling, and affine transformation.
Translation and scaling: Translation and scaling are naturally inherited into existing detection
systems during scoring. Scaling is resolved by scanning windows at multiple scales of the image,
which typical sliding-window detectors already do. Translation is implemented by relaxing the
location of the ground-truth bounding box B_i. Similar to Felzenszwalb et al. [22]'s approach of finding latent positive examples, we extract x_i from multiple boxes that have a significant overlap with B_i, and select a candidate example that has the smallest Loss(β^c · x_i, sign(y_i)).
⁴ In this paper, we iterate only once, as it was sufficient to borrow similar examples (see Figure 2).
Original Class   Without transformation           With transformation
                 Borrowed Classes  AP improvement  Borrowed Classes  AP improvement
Truck            car, van          +7.14           car, van          +9.49
Shelves          bookcase          +0.17           bookcase          +4.73
Car              truck, van        +1.07           truck, van, bus   +1.78
Desk lamp        none              N/A             floor lamp        +0.30
Toilet           none              N/A             sink, cup         -0.68
Table 1: Learned borrowing relationships: Most discovered relations are consistent with human subjective
judgment. Classes that were borrowed only with transformations are shown in bold.
Affine transformation: We also change the aspect ratios of borrowed examples so that they look more alike (as in sofa-armchair and desk lamp-floor lamp). Our method is to transform training examples to every canonical aspect ratio of the target class c, and find the best candidate for borrowing. The canonical aspect ratios can be determined by clustering the aspect ratios of all ground-truth bounding boxes [22], or based on the viewpoints, provided we have labels for each viewpoint. Specifically, suppose that there is a candidate example x_i to be borrowed by the target class c and there are L canonical aspect ratios of c. We transform x_i into x_i^l by resizing one dimension so that {x_i^l}_{0≤l≤L} contains all L canonical aspect ratios of c (and x_i^0 = x_i). In order to ensure that only one candidate is generated from x_i, we select a single transformed example x_i^l, for each i, that minimizes Loss(β^c · x_i^l, sign(y_i)). Note that this final candidate can be selected during every training iteration, so that the best selection can change as the model is updated.
Figure 1 illustrates the kind of learning our model performs. To borrow examples for sofa, each
example in the dataset is transformed into the frontal and side view aspect ratios of sofa. The
transformed example that has the smallest Loss(·) is selected for borrowing. Each example is then assigned a borrowing weight using Eq (2). Finally, the new sofa detector is trained using the borrowed examples together with the original sofa examples. We refer to the detector trained without affine transformation as the borrowed-set detector, and to the one trained with affine transformation as the borrowed-transformed detector.
4 Experimental Results
We present experimental results on two standard datasets: the SUN09 dataset [21] and the PASCAL
VOC 2007 challenge [18]. The SUN09 dataset contains 4,082 training images and 9,518 testing
images. We selected the top 100 object categories according to the number of training examples.
These 100 object categories include a wide variety of classes such as bed, car, stool, column, and
flowers, and their distribution is heavy-tailed, varying from 1356 to 8 instances. The PASCAL dataset
contains 2,051 training images and 5,011 testing images, belonging to 20 different categories. For
both datasets, we use the PASCAL VOC 2008 evaluation protocol [18]. During the testing phase,
in order to enable a direct comparison between various detectors, we measure the detection score of
class c as the mean Average Precision (AP) score across all positive images that belong to class c
and randomly sub-sampled negative images, so that the ratio between positive and negative examples
remains the same across all classes.
Our experiments are based on one of the state-of-the-art detectors [22]. Following [22], we use a hinge loss for Loss(·) and a squared L2-norm for R(·) in Eq (2), where every detector contains two root components. There are four controllable parameters: λ, λ1, λ2, and ε (see Eq (2)). We used the same λ as in [22]. λ1 and λ2 were picked based on the validation set, and ε was set to 0.6. In order to improve computation time, we threshold each weight w_i so that it will either be 0 or 1.
We perform two kinds of experiments: (1) borrowing examples from other classes within the same
dataset, and (2) borrowing examples from the same class that come from a different dataset. Both
experiments require identifying which examples are beneficial to borrow for the target class.
4.1 Borrowing from Other Classes
We first tested our model to identify a useful set of examples to borrow from other classes in order
to improve the detection quality on the SUN09 dataset. A unique feature of the SUN09 dataset is
that all images were downloaded from the internet without making any effort to create a uniform
distribution over object classes. We argue that this represents a much more realistic setting, in which
some classes contain a lot of training data and many other classes contain little data.
Figure 3: Borrowing weights: examples are ranked by learned weights w: (a) shelves examples to be borrowed by the bookcase class and (b) chair examples to be borrowed by the swivel chair class. Both show that examples with higher w are more similar to the target class. (green: borrowed, red: not borrowed)
Figure 4: (a) Number of examples used for training per class before borrowing (blue) and after borrowing
(red). Categories with fewer examples tend to borrow more examples. AP improvements (b) without and (c)
with transformations, compared to the single detector trained only with the original examples. Note that our
model learned to borrow from (b) 28 classes, and (c) 37 classes.
Among 100 classes, our model learned that there are 28 and 37 classes that can borrow from other
classes without and with transformations, respectively. Table 1 shows some of the learned borrowing
relationships along with their improvements. Most are consistent with human subjective judgment.
Interestingly, our model excluded bag, slot machine, flag, and fish, among others, from borrowing.
Many of those objects have quite distinctive visual appearances compared to other object categories.
Figure 3 shows borrowed examples along with their relative orders according to the borrowing indicator weights w_i. Note that our model learns quite reliable weights: for example, chair examples in the green box are similar to the target swivel chair class, whereas examples in the red box are either occluded or very atypical.
Figure 4 further displays AP improvements of the borrowed-set and borrowed-transformed detectors against standard single detectors. Observe that over 20 categories benefit to various degrees from borrowing related examples. Among borrowed-transformed detectors, the categories with the largest improvements are truck (9.49), picture (7.54), bus (7.32), swivel chair (6.88), and bookcase (5.62). We note that all of these objects borrow visual appearance from other related frequent objects, including car, chair, and shelves. The five objects with the largest decrease in AP include plate (-3.53), fluorescent tube (-3.45), ball (-3.21), bed (-2.69), and microwave (-2.52). Model performance often deteriorates when our model discovers relationships that are not ideal (e.g. toilet borrowing cup and sink; plate borrowing mug).
a borrowing rate is defined as the ratio of the total number of borrowed examples to the number of
original training examples. Observe that borrowing rates are much higher when there are fewer
training examples (see also Figure 4a). On average, the borrowed-set detectors borrow 75% of
the total number of original training examples, whereas the borrowed-transformed detectors borrow
about twice as many examples, 149%.
Table 3 shows AP improvements of our methods. Borrowed-set detectors improve AP by 1.00 and borrowed-transformed detectors by 1.36. This is to be expected, as introducing transformations allows us to borrow from a much richer set of object classes. We also compare to a baseline approach, which
Figure 5: Detection results on random images containing the target class. Only the most confident detection
is shown per image. For clearer visualization, we do not show images where both detectors have large overlap. Our detectors (2nd/4th row) show better localizations than single detectors (1st/3rd row). (red: correct
detection, yellow: false detection)
Number of Training Examples   1-30   31-50   51-100   101-150   >150   ALL
Borrowed-set                  1.69   0.48    0.43     0.48      0.13   0.75
Borrowed-Transformed          2.75   2.57    0.94     0.81      0.17   1.49
Table 2: Borrowing rates for the borrowed-set and borrowed-transformed models. Borrowing rate is defined
as the ratio of the number of borrowed examples to the number of original examples.
Methods                              AP without borrowing   AP improvements
Borrowed-set                         14.99                  +1.00
All examples from the same classes   16.59                  +0.30
Borrowed-Transformed                 16.59                  +1.36
Table 3: AP improvements of the borrowed-set and borrowed-transformed detectors. We also compared the borrowed-transformed method against the baseline approach of borrowing all examples, without any selection of examples, from the same classes our method borrows from. The 2nd column shows the average AP score of the detectors without any borrowing in the classes used for borrowed-set or borrowed-transformed.
uses all examples in the borrowed classes of the borrowed-transformed method. For example, if class A borrows some examples from classes B and C using the borrowed-transformed method, then the baseline approach uses all examples from classes A, B, and C without any selection. Note that this baseline approach improves AP by only 0.30, compared to 1.36 for our method.
Finally, Figure 5 displays detection results. Single and borrowed-transformed detections are visualized on test images, chosen at random, that contain the target class. In many cases, transformed
detectors are better at localizing the target object, even when they fail to place a bounding box around
the full object. We also note that borrowing similar examples tends to introduce some confusions
between related object categories. However, we argue that this type of failure is much more tolerable
compared to the single detector, which often has false detections of completely unrelated objects.
4.2 Borrowing from Other Datasets
Combining datasets is a non-trivial task as different datasets contain different biases. Consider
training a car detector that is going to be evaluated on the PASCAL dataset. The best training set for
such a detector would be the dataset provided by the PASCAL challenge, as both the training and test
sets come from the same underlying distribution. In order to improve model performance, a simple
mechanism would be to add additional training examples. For this, we could look for other datasets
that contain annotated images of cars, for example the SUN09 dataset. However, as the PASCAL and SUN09 datasets come with different biases, many of the training examples from SUN09 are not as effective for training when the detector is evaluated on the PASCAL dataset, a problem that
was extensively studied by [29]. Here, we show that, instead of simply mixing the two datasets, our
model can select a useful set of examples from the SUN09 for the PASCAL dataset, and vice-versa.
Figure 6: SUN09 borrowing PASCAL examples: (a) typical SUN09 car images, (b) typical PASCAL car images, (c) PASCAL car images sorted by learned borrowing weights. (c) shows that examples are sorted from canonical viewpoints (left) to atypical or occluded examples (right). (green: borrowed, red: not borrowed)
(a) Testing on the SUN09 dataset
         SUN09 only   PASCAL only   SUN09+PASCAL   SUN09+borrow PASCAL
car      43.31        39.47         43.64          45.88
person   45.46        28.78         46.46          46.90
sofa     12.96        11.97         12.86          15.25
chair    18.82        13.84         18.18          20.45
mean     30.14        23.51         30.29          32.12
Diff.                 -6.63         +0.15          +1.98

(b) Testing on the PASCAL 2007 dataset
         PASCAL only   SUN09 only   PASCAL+SUN09   PASCAL+borrow SUN09
car      49.58         40.81        49.91          51.00
person   23.58         22.31        26.05          27.05
sofa     19.91         13.99        20.01          22.17
chair    14.23         14.20        19.06          18.55
mean     26.83         22.83        28.76          29.69
Diff.                  -4.00        +1.93          +2.86
Table 4: Borrowing from other datasets: AP scores of various detectors. "SUN09 only" and "PASCAL only" are trained using the SUN09 dataset [21] and the PASCAL dataset [18] without borrowing any examples. "SUN09+PASCAL" is trained using positive examples from both SUN09 and PASCAL, and negative examples from the target dataset. "PASCAL+borrow SUN09" and "SUN09+borrow PASCAL" borrow selected examples from another dataset for each target dataset using our method. The last Diff. row shows AP improvements over the "standard" state-of-the-art detector trained on the target dataset (column 1).
Figure 6 shows the kind of borrowing our model performs. Figure 6a,b display typical car images
from the SUN09 and PASCAL datasets. Compared to SUN09, PASCAL images display a much
wider variety of car types, with different viewpoints and occlusions. Figure 6c further shows the
ranking of PASCAL examples by w_i^{SUN09 car} for i ∈ D_PASCAL. Observe that images with high w match the canonical representations of SUN09 images much better compared to images with low w.
Table 4 shows performances of four detectors. Observe that detectors trained on the target dataset
(column 1) outperform ones trained using another dataset (column 2). This shows that there exists
a significant difference between the two datasets, which agrees with previous work [29]. Next, we
tested detectors by simply combining positive examples from both datasets and using negative examples from the target dataset (column 3). On the SUN09 test set, the improvement was not significant,
and on the PASCAL test set, we observed slight improvements. Detectors trained by our model
(column 4) substantially outperformed single detectors as well as ones trained by mixing the
two datasets. The detectors (columns 1 and 2) were trained using the state-of-the-art algorithm [22].
5 Conclusion
In this paper we presented an effective method for transfer learning across object categories. The
proposed approach consists of searching for similar object categories using a sparse grouped Lasso framework, and borrowing examples that have similar visual appearances to the target class. We further
demonstrated that our method, both with and without transformation, is able to find useful object
instances to borrow, resulting in improved accuracy for multi-class object detection compared to the
state-of-the-art detector trained only with examples available for each class.
Acknowledgments: This work is funded by ONR MURI N000141010933, CAREER Award No.
0747120, NSERC, and an NSF Graduate Research Fellowship.
References
[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[2] S. Krempp, D. Geman, and Y. Amit. Sequential learning of reusable parts for object detection. Technical
report, CS Johns Hopkins, 2002.
[3] A. Torralba, K. P. Murphy, and W. T. Freeman. Sharing features: efficient boosting procedures for multiclass object detection. In CVPR, 2004.
[4] E. Bart and S. Ullman. Cross-generalization: learning novel classes from a single example by feature
replacement. In CVPR, 2005.
[5] A. Opelt, A. Pinz, and A. Zisserman. Incremental learning of object detectors using a visual shape
alphabet. In CVPR, 2006.
[6] K. Levi, M. Fink, and Y. Weiss. Learning from a small number of training examples by exploiting object
categories. In Workshop of Learning in Computer Vision, 2004.
[7] A. Quattoni, M. Collins, and T.J. Darrell. Transfer learning for image classification with sparse prototype
representations. In CVPR, 2008.
[8] C.H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class
attribute transfer. In CVPR, 2009.
[9] E. Miller, N. Matsakis, and P. Viola. Learning from one example through shared densities on transforms.
In CVPR, 2000.
[10] L. Fei-Fei, R. Fergus, and P. Perona. A bayesian approach to unsupervised one-shot learning of object
categories. In ICCV, 2003.
[11] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an
incremental bayesian approach tested on 101 object categories. In IEEE. Workshop on GMBV, 2004.
[12] E. Sudderth, A. Torralba, W. T. Freeman, and W. Willsky. Learning hierarchical models of scenes, objects,
and parts. In ICCV, 2005.
[13] J. Sivic, B.C. Russell, A. Zisserman, W.T. Freeman, and A.A. Efros. Unsupervised discovery of visual
object class hierarchies. In CVPR, 2008.
[14] E. Bart, I. Porteous, P. Perona, and M. Welling. Unsupervised learning of visual taxonomies. In CVPR,
2008.
[15] D.M. Gavrila and J. Giebel. Virtual sample generation for template-based shape matching. In CVPR,
2001.
[16] R. Fergus, H. Bernal, Y. Weiss, and A. Torralba. Semantic label sharing for learning with many categories.
In ECCV, 2010.
[17] Gang Wang, David Forsyth, and Derek Hoiem. Comparative object similarity for improved recognition
with few or no examples. In CVPR, 2010.
[18] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object
classes (voc) challenge. International Journal of Computer Vision, 88(2):303-338, June 2010.
[19] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: a database and web-based tool
for image annotation. IJCV, 77(1-3):157-173, 2008.
[20] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image
database. In CVPR, 2009.
[21] Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010.
[22] P.F. Felzenszwalb, R.B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively
trained part-based models. TPAMI, 32(9):1627-1645, 2010.
[23] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[24] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of
the Royal Statistical Society, Series B, 68:49-67, 2006.
[25] Theodoros Evgeniou and Massimiliano Pontil. Regularized multi?task learning. In ACM SIGKDD, 2004.
[26] T. Tommasi, F. Orabona, and B. Caputo. Safety in numbers: Learning categories from few examples with
multi model knowledge transfer. In CVPR, 2011.
[27] R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to share visual appearance for multiclass
object detection. In CVPR, 2011.
[28] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. A note on the group lasso and a sparse group
lasso. Technical report, Department of Statistics, Stanford University, 2010.
[29] A. Torralba and A. Efros. Unbiased look at dataset bias. In CVPR, 2011.
Orthogonal Matching Pursuit with Replacement
Prateek Jain
Microsoft Research India
Bangalore, INDIA
[email protected]
Ambuj Tewari
The University of Texas at Austin
Austin, TX
[email protected]
Inderjit S. Dhillon
The University of Texas at Austin
Austin, TX
[email protected]
Abstract
In this paper, we consider the problem of compressed sensing where the goal is to recover all sparse
vectors using a small number of fixed linear measurements. For this problem, we propose a novel
partial hard-thresholding operator that leads to a general family of iterative algorithms. While one
extreme of the family yields well known hard thresholding algorithms like ITI and HTP [17, 10], the
other end of the spectrum leads to a novel algorithm that we call Orthogonal Matching Pursuit with
Replacement (OMPR). OMPR, like the classic greedy algorithm OMP, adds exactly one coordinate
to the support at each iteration, based on the correlation with the current residual. However, unlike
OMP, OMPR also removes one coordinate from the support. This simple change allows us to prove
that OMPR has the best known guarantees for sparse recovery in terms of the Restricted Isometry
Property (a condition on the measurement matrix). In contrast, OMP is known to have very weak
performance guarantees under RIP. Given its simple structure, we are able to extend OMPR using
locality sensitive hashing to get OMPR-Hash, the first provably sub-linear (in dimensionality) algorithm for sparse recovery. Our proof techniques are novel and flexible enough to also permit the
tightest known analysis of popular iterative algorithms such as CoSaMP and Subspace Pursuit. We
provide experimental results on large problems, providing recovery for vectors of size up to one million
dimensions. We demonstrate that for large-scale problems our proposed methods are more robust
and faster than existing methods.
1 Introduction
We nowadays routinely face high-dimensional datasets in diverse application areas such as biology, astronomy, and
finance. The associated curse of dimensionality is often alleviated by prior knowledge that the object being estimated
has some structure. One of the most natural and well-studied structural assumptions for vectors is sparsity. Accordingly,
a huge amount of recent work in machine learning, statistics and signal processing has been devoted to finding better
ways to leverage sparse structures. Compressed sensing, a new and active branch of modern signal processing, deals
with the problem of designing measurement matrices and recovery algorithms, such that almost all sparse signals can
be recovered from a small number of measurements. It has important applications in imaging, computer vision and
machine learning (see, for example, [9, 24, 14]).
In this paper, we focus on the compressed sensing setting [3, 7] where we want to design a measurement matrix
A ∈ R^{m×n} such that a sparse vector x* ∈ R^n with ‖x*‖₀ := |supp(x*)| ≤ k < n can be efficiently recovered from
the measurements b = Ax* ∈ R^m. Initial work focused on various random ensembles of matrices A such that, if A
was chosen randomly from that ensemble, one would be able to recover all or almost all sparse vectors x* from Ax*.
Candes and Tao [3] isolated a key property called the restricted isometry property (RIP) and proved that, as long as the
measurement matrix A satisfies RIP, the true sparse vector can be obtained by solving an ℓ1-optimization problem,

    min ‖x‖₁  s.t.  Ax = b.
The above problem can be easily formulated as a linear program and is hence efficiently solvable. We recall for the
reader that a matrix A is said to satisfy RIP of order k if there is some δk ∈ [0, 1) such that, for all x with ‖x‖₀ ≤ k,
we have

    (1 − δk) ‖x‖₂² ≤ ‖Ax‖₂² ≤ (1 + δk) ‖x‖₂².

Several random matrix ensembles are known to satisfy δk < θ with high probability provided one chooses
m = O((k/θ²) log(n/k)) measurements. It was shown in [2] that ℓ1-minimization recovers all k-sparse vectors provided A
satisfies δ2k < 0.414, although the condition has been recently improved to δ2k < 0.473 [11]. Note that, in compressed
sensing, the goal is to recover all, or most, k-sparse signals using the same measurement matrix A. Hence, weaker
conditions such as restricted convexity [20] studied in the statistical literature (where the aim is to recover a single
sparse vector from noisy linear measurements) typically do not suffice. In fact, if RIP is not satisfied then multiple
sparse vectors x can lead to the same observation b, hence making recovery of the true sparse vector impossible.
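As an aside, a minimal sketch (not from the paper) of how δk can be estimated for tiny problems: exact computation is intractable in general, but for small n and k one can enumerate all supports and take extreme singular values of each submatrix.

import itertools
import numpy as np

def rip_constant(A, k):
    # Smallest delta such that (1-delta)|x|^2 <= |Ax|^2 <= (1+delta)|x|^2
    # for all k-sparse x, computed exactly by enumerating all supports.
    n = A.shape[1]
    delta = 0.0
    for support in itertools.combinations(range(n), k):
        # Extreme singular values of the submatrix give the tightest
        # bounds over all x supported on `support`.
        s = np.linalg.svd(A[:, support], compute_uv=False)
        delta = max(delta, abs(s[0] ** 2 - 1), abs(s[-1] ** 2 - 1))
    return delta

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)   # unit-norm columns
print(rip_constant(A, k=2))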
Based on its RIP guarantees, ℓ1-minimization can guarantee recovery using just O(k log(n/k)) measurements, but it
has been observed in practice that ℓ1-minimization is too expensive in large scale applications [8], for example, when
the dimensionality is in the millions. This has sparked a huge interest in other iterative methods for sparse recovery.
An early classic iterative method is Orthogonal Matching Pursuit (OMP) [21, 6] that greedily chooses elements to add
to the support. It is a natural, easy-to-implement and fast method but unfortunately lacks strong theoretical guarantees.
Indeed, it is known that, if run for k iterations, OMP cannot uniformly recover all k-sparse vectors assuming an RIP
condition of the form δ2k ≤ θ [22, 18]. However, Zhang [26] showed that OMP, if run for 30k iterations, recovers the
optimal solution when δ31k ≤ 1/3; a significantly more restrictive condition than the ones required by other methods
like ℓ1-minimization.
Several other iterative approaches have been proposed that include Iterative Soft Thresholding (IST) [17], Iterative
Hard Thresholding (IHT) [1], Compressive Sampling Matching Pursuit (CoSaMP) [19], Subspace Pursuit (SP) [4],
Iterative Thresholding with Inversion (ITI) [16], Hard Thresholding Pursuit (HTP) [10] and many others. In the family
of iterative hard thresholding algorithms, we can identify two major subfamilies [17]: one- and two-stage algorithms.
As their names suggest, the distinction is based on the number of stages in each iteration of the algorithm. One-stage
algorithms such as IHT, ITI and HTP decide on the choice of the next support set and then usually solve a least
squares problem on the updated support. The one-stage methods always set the support set to have size k, where k
is the target sparsity level. On the other hand, two-stage algorithms, notable examples being CoSaMP and SP, first
enlarge the support set, solve a least squares problem on it, and then reduce the support set back again to the desired size. A
second least squares problem is then solved on the reduced support. These algorithms typically enlarge and reduce
the support set by k or 2k elements. An exception is the two-stage algorithm FoBa [25] that adds and removes single
elements from the support. However, it differs from our proposed methods as its analysis requires very restrictive RIP
conditions (δ8k < 0.1 as quoted in [14]) and the connection to locality sensitive hashing (see below) is not made.
Another algorithm with replacement steps was studied by Shalev-Shwartz et al. [23]. However, the algorithm and the
setting under which it is analyzed are different from ours.
In this paper, we present and provide a unified analysis for a family of one-stage iterative hard thresholding algorithms.
The family is parameterized by a positive integer l ≤ k. At the extreme value l = k, we recover the algorithm ITI/HTP.
At the other extreme, l = 1, we get a novel algorithm that we call Orthogonal Matching Pursuit with Replacement
(OMPR). OMPR can be thought of as a simple modification of the classic greedy algorithm OMP: instead of simply
adding an element to the existing support, it replaces an existing support element with a new one. Surprisingly, this
change allows us to prove sparse recovery under the condition δ2k < 0.499. This is the best δ2k-based RIP condition
under which any method, including ℓ1-minimization, is (currently) known to provably perform sparse recovery.
OMPR also lends itself to a faster implementation using locality sensitive hashing (LSH). This allows us to provide
recovery guarantees using an algorithm whose run-time is provably sub-linear in n, the number of dimensions. An
added advantage of OMPR, unlike many iterative methods, is that no careful tuning of the step-size parameter is
required even under noisy settings or even when RIP does not hold. The default step-size of 1 is always guaranteed to
converge to at least a local optimum.
Finally, we show that our proof techniques used in the analysis of the OMPR family are useful in tightening the
analysis of two-stage algorithms, such as CoSaMP and SP, as well. As a result, we are able to prove better recovery
guarantees for these algorithms: δ4k < 0.35 for CoSaMP, and δ3k < 0.35 for SP. We hope that this unified analysis
sheds more light on the interrelationships between the various kinds of iterative hard thresholding algorithms.
In summary, the contributions of this paper are as follows.
• We present a family of iterative hard thresholding algorithms that on one end of the spectrum includes existing methods such as ITI/HTP while on the other end gives OMPR. OMPR is an improvement over the
classical OMP method as it enjoys better theoretical guarantees and is also better in practice as shown in our
experiments.
• Unlike other improvements over OMP, such as CoSaMP or SP, OMPR changes only one element of the
support at a time. This allows us to use Locality Sensitive Hashing (LSH) to speed it up, resulting in the first
provably sub-linear (in the ambient dimensionality n) time sparse recovery algorithm.
Algorithm 1 OMPR
1: Input: matrix A, vector b, sparsity level k
2: Parameter: step size η > 0
3: Initialize x¹ s.t. |supp(x¹)| = k, I₁ = supp(x¹)
4: for t = 1 to T do
5:   z^{t+1} ← x^t + η Aᵀ(b − Ax^t)
6:   j_{t+1} ← argmax_{j ∉ I_t} |z_j^{t+1}|
7:   J_{t+1} ← I_t ∪ {j_{t+1}}
8:   y^{t+1} ← H_k(z^{t+1}_{J_{t+1}})
9:   I_{t+1} ← supp(y^{t+1})
10:  x^{t+1}_{I_{t+1}} ← A_{I_{t+1}}\b,  x^{t+1}_{Ī_{t+1}} ← 0
11: end for

Algorithm 2 OMPR(l)
1: Input: matrix A, vector b, sparsity level k
2: Parameter: step size η > 0, replacement budget l
3: Initialize x¹ s.t. |supp(x¹)| = k, I₁ = supp(x¹)
4: for t = 1 to T do
5:   z^{t+1} ← x^t + η Aᵀ(b − Ax^t)
6:   top_{t+1} ← indices of the top l elements of |z^{t+1}_{Ī_t}|
7:   J_{t+1} ← I_t ∪ top_{t+1}
8:   y^{t+1} ← H_k(z^{t+1}_{J_{t+1}})
9:   I_{t+1} ← supp(y^{t+1})
10:  x^{t+1}_{I_{t+1}} ← A_{I_{t+1}}\b,  x^{t+1}_{Ī_{t+1}} ← 0
11: end for
• We provide a general proof for all the algorithms in our partial hard thresholding based family. In particular,
we can guarantee recovery using OMPR, under both noiseless and noisy settings, provided δ2k < 0.499.
This is the least restrictive δ2k condition under which any efficient sparse recovery method is known to work.
Furthermore, our proof technique can be used to provide a general theorem that provides the least restrictive
known guarantees for all the two-stage algorithms such as CoSaMP and SP (see Appendix D).
All proofs omitted from the main body of the paper can be found in the appendix.
2 Orthogonal Matching Pursuit with Replacement
Orthogonal matching pursuit (OMP) is a classic iterative algorithm for sparse recovery. At every stage, it selects a
coordinate to include in the current support set by maximizing the inner product between columns of the measurement
matrix A and the current residual b − Ax^t. Once the new coordinate has been added, it solves a least squares problem
to fully minimize the error on the current support set. As a result, the residual becomes orthogonal to the columns of
A that correspond to the current support set. Thus, the least squares step is also referred to as orthogonalization by
some authors [5].
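A minimal OMP sketch (an assumed implementation, not the authors' code) that makes this description concrete:

import numpy as np

def omp(A, b, k):
    n = A.shape[1]
    support = []
    x = np.zeros(n)
    residual = b.copy()
    for _ in range(k):
        # The residual is orthogonal to already-selected columns, so the
        # argmax picks a new coordinate.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least squares on the support ("orthogonalization").
        x = np.zeros(n)
        x[support], *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A @ x
    return x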
Let us briefly explain some of our notation. We use the MATLAB notation:

    A\b := argmin_x ‖Ax − b‖².

The hard thresholding operator H_k(·) sorts its argument vector in decreasing order (in absolute value) and retains
only the top k entries. It is defined formally in the next section. Also, we use subscripts to denote sub-vectors and
sub-matrices, e.g. if I ⊆ [n] is a set of cardinality k and x ∈ R^n, then x_I ∈ R^k denotes the sub-vector of x indexed by I.
Similarly, A_I for a matrix A ∈ R^{m×n} denotes a sub-matrix of size m × k with columns indexed by I. The complement
of set I is denoted by Ī and x_Ī denotes the sub-vector not indexed by I. The support (indices of non-zero entries) of a
vector x is denoted by supp(x).
Our new algorithm, called Orthogonal Matching Pursuit with Replacement (OMPR) and shown as Algorithm 1, differs
from OMP in two respects. First, the selection of the coordinate to include is based not just on the magnitude of entries
in Aᵀ(b − Ax^t) but instead on a weighted combination x^t + η Aᵀ(b − Ax^t), with the step-size η controlling the relative
importance of the two addends. Second, the selected coordinate replaces one of the existing elements in the support,
namely the one corresponding to the minimum magnitude entry in the weighted combination mentioned above.
Once the support I_{t+1} of the next iterate has been determined, the actual iterate x^{t+1} is obtained by solving the least
squares problem:

    x^{t+1} = argmin_{x : supp(x) = I_{t+1}} ‖Ax − b‖².
Note that if the matrix A satisfies RIP of order k or larger, the above problem will be well conditioned and can be
solved quickly and reliably using an iterative least squares solver. We will show that OMPR, unlike OMP, recovers any
k-sparse vector under the RIP based condition δ2k ≤ 0.499. This appears to be the least restrictive recovery condition
(i.e., best known condition) under which any method, be it basis pursuit (ℓ1-minimization) or some iterative algorithm,
is guaranteed to recover all k-sparse vectors.
In the literature on sparse recovery, RIP based conditions of an order other than 2k are often provided. It is
seldom possible to directly compare two conditions, say, one based on δ2k and the other based on δ3k. Foucart [10] has
given a heuristic to compare such RIP conditions based on the number of samples it takes in the Gaussian ensemble
to satisfy a given RIP condition. This heuristic says that an RIP condition of the form δck < θ is less restrictive if the
ratio c/θ² is smaller. For the OMPR condition δ2k < 0.499, this ratio is 2/0.499² ≈ 8, which makes it heuristically
the least restrictive RIP condition for sparse recovery. The following theorems summarize our main results on OMPR.
Theorem 1 (Noiseless Case). Suppose the vector x* ∈ R^n is k-sparse and the matrix A satisfies δ2k < 0.499 and
δ2 < 0.002. Then OMPR converges to an ε-approximate solution (i.e. ½‖Ax − b‖² ≤ ε) from measurements
b = Ax* in O(k log(k/ε)) iterations.
Theorem 2 (Noisy Case). Suppose the vector x* ∈ R^n is k-sparse and the matrix A satisfies δ2k < 0.499 and
δ2 < 0.002. Then OMPR converges to a (C, ε)-approximate solution (i.e. ½‖Ax − b‖² ≤ C‖e‖² + ε) from
measurements b = Ax* + e in O(k log((k + ‖e‖²)/ε)) iterations. Here C > 1 is a constant dependent only on δ2k.
The above theorems are special cases of our convergence results for a family of algorithms that contains OMPR as a
special case. We now turn our attention to this family. We note that the condition δ2 < 0.002 is very mild and will
typically hold for standard random matrix ensembles as soon as the number of rows sampled is larger than a fixed
universal constant.
3 A New Family of Iterative Algorithms
In this section we show that OMPR is one particular member of a family of algorithms parameterized by a single
integer l ∈ {1, …, k}. The l-th member of this family, OMPR(l), shown in Algorithm 2, replaces at most l elements
of the current support with new elements. OMPR corresponds to the choice l = 1. Hence, OMPR and OMPR(1)
refer to the same algorithm.
Our first result in this section connects the OMPR family to hard thresholding. Given a set I of cardinality k, define
the partial hard thresholding operator

    H_k(z; I, l) := argmin_{y : ‖y‖₀ ≤ k, |supp(y)\I| ≤ l} ‖y − z‖.    (1)

As is clear from the definition, the above operator tries to find a vector y close to a given vector z under two constraints:
(i) the vector y should have bounded support (‖y‖₀ ≤ k), and (ii) its support should not include more than l new
elements outside a given support I.
The name partial hard thresholding operator is justified because of the following reasoning. When l = k, the constraint
|supp(y)\I| ≤ l is trivially implied by ‖y‖₀ ≤ k and hence the operator becomes independent of I. In fact, it becomes
identical to the standard hard thresholding operator

    H_k(z; I, k) = H_k(z) := argmin_{y : ‖y‖₀ ≤ k} ‖y − z‖.    (2)

Even though the definition of H_k(z) seems to involve searching through (n choose k) subsets, it can in fact be computed
efficiently by simply sorting the vector z by decreasing absolute value and retaining the top k entries.
The following result shows that even the partial hard thresholding operator is easy to compute. In fact, lines 6-8 in
Algorithm 2 precisely compute H_k(z^{t+1}; I_t, l).
Proposition 3. Let |I| ≤ k and z be given. Then y = H_k(z; I, l) can be computed using the sequence of operations

    top ← indices of the top l elements of |z_Ī|,
    J ← I ∪ top,
    y ← H_k(z_J).
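A sketch of Proposition 3 in code (assumed helper names, not the authors' implementation):

import numpy as np

def hard_threshold(z, k):
    # Keep the k largest-magnitude entries of z, zero out the rest.
    y = np.zeros_like(z)
    top = np.argsort(-np.abs(z))[:k]
    y[top] = z[top]
    return y

def partial_hard_threshold(z, I, k, l):
    # H_k(z; I, l): best k-sparse approximation of z whose support adds
    # at most l new indices outside the set I.
    outside = np.setdiff1d(np.arange(len(z)), list(I))
    top = outside[np.argsort(-np.abs(z[outside]))[:l]]  # top l new indices
    J = np.union1d(list(I), top).astype(int)
    zJ = np.zeros_like(z)
    zJ[J] = z[J]                                        # restrict z to J
    return hard_threshold(zJ, k)                        # y = H_k(z_J)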
The proof of this proposition is straightforward and elementary. However, using it, we can now see that the OMPR(l)
algorithm has a simple conceptual structure. In each iteration (with current iterate x^t having support I_t = supp(x^t)),
we do the following:
1. (Gradient Descent) Form z^{t+1} = x^t − η Aᵀ(Ax^t − b). Note that Aᵀ(Ax^t − b) is the gradient of the objective
function ½‖Ax − b‖² at x^t.
2. (Partial Hard Thresholding) Form y^{t+1} by partially hard thresholding z^{t+1} using the operator H_k(·; I_t, l).
3. (Least Squares) Form the next iterate x^{t+1} by solving a least squares problem on the support I_{t+1} of y^{t+1} (see the sketch below).
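A minimal sketch of one OMPR(l) iteration as just described (assumed implementation; it reuses partial_hard_threshold from the previous sketch):

import numpy as np

def ompr_step(A, b, x, I, k, l, eta=1.0):
    # 1. Gradient descent on f(x) = 0.5 * ||Ax - b||^2
    z = x + eta * (A.T @ (b - A @ x))
    # 2. Partial hard thresholding: replace at most l support elements
    y = partial_hard_threshold(z, I, k, l)
    I_next = np.flatnonzero(y)
    # 3. Least squares on the new support
    x_next = np.zeros_like(x)
    x_next[I_next], *_ = np.linalg.lstsq(A[:, I_next], b, rcond=None)
    return x_next, I_next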
A nice property enjoyed by the entire OMPR family is guaranteed sparse recovery under RIP based conditions. Note
from below that the condition under which OMPR(l) recovers sparse vectors becomes more restrictive as l increases.
This could be an artifact of our analysis, as in experiments, we do not see any degradation in recovery ability as l is
increased.
Theorem 4 (Noiseless Case). Suppose the vector x* ∈ R^n is k-sparse. Then OMPR(l) converges to an ε-approximate solution (i.e. ½‖Ax − b‖² ≤ ε) from measurements b = Ax* in O((k/l) log(k/ε)) iterations provided we choose a
step size η that satisfies η(1 + δ2l) < 1 and η(1 − δ2k) > 1/2.
Theorem 5 (Noisy Case). Suppose the vector x* ∈ R^n is k-sparse. Then OMPR(l) converges to a (C, ε)-approximate
solution (i.e., ½‖Ax − b‖² ≤ C‖e‖² + ε) from measurements b = Ax* + e in O((k/l) log((k + ‖e‖²)/ε)) iterations
provided we choose a step size η that satisfies η(1 + δ2l) < 1 and η(1 − δ2k) > 1/2. Here C > 1 is a constant
dependent only on δ2l, δ2k.
Proof. Here we provide a rough sketch of the proof of Theorem 4; the complete proof is given in Appendix A.
Our proof uses the following crucial observation regarding the structure of the vector z^{t+1} = x^t − η Aᵀ(Ax^t − b).
Due to the least squares step of the previous iteration, the current residual Ax^t − b is orthogonal to the columns of A_{I_t}.
This means that

    z^{t+1}_{I_t} = x^t_{I_t},    z^{t+1}_{Ī_t} = −η A_{Ī_t}ᵀ(Ax^t − b).    (3)
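A tiny numeric check (not from the paper) of the orthogonality fact behind (3): after solving least squares on a support I, the residual is orthogonal to A_I.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))
b = rng.standard_normal(30)
I = [3, 17, 42]
xI, *_ = np.linalg.lstsq(A[:, I], b, rcond=None)
residual = b - A[:, I] @ xI
print(np.abs(A[:, I].T @ residual).max())  # ~1e-14, numerically zero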
As the algorithm proceeds, elements come in and move out of the current set I_t. Let us give names to the sets of found
and lost elements as we move from I_t to I_{t+1}:

    (found): F_t = I_{t+1}\I_t,    (lost): L_t = I_t\I_{t+1}.

Hence, using (3) and the update for y^{t+1}: y^{t+1}_{F_t} = z^{t+1}_{F_t} = −η A_{F_t}ᵀ A(x^t − x*), and z^{t+1}_{L_t} = x^t_{L_t}. Now let J(x) =
½‖Ax − b‖²; then, using the upper RIP bound and the fact that |supp(y^{t+1} − x^t)| = |F_t ∪ L_t| ≤ 2l, we can show that (details
are in Appendix A):

    J(y^{t+1}) − J(x^t) ≤ ((1 + δ2l)/2 − 1/η) ‖y^{t+1}_{F_t}‖² + (1/(2η)) ‖x^t_{L_t}‖².    (4)
Furthermore, since y^{t+1} is chosen based on the k largest entries in z^{t+1}_{J_{t+1}}, we have ‖y^{t+1}_{F_t}‖² = ‖z^{t+1}_{F_t}‖² ≥ ‖z^{t+1}_{L_t}‖² =
‖x^t_{L_t}‖². Plugging this into (4), we get:

    J(y^{t+1}) − J(x^t) ≤ ½ (1 + δ2l − 1/η) ‖y^{t+1}_{F_t}‖².    (5)

Since J(x^{t+1}) ≤ J(y^{t+1}) ≤ J(x^t), the above expression shows that if η < 1/(1 + δ2l), then our method monotonically
decreases the objective function and converges to a local optimum even if RIP is not satisfied (note that the upper RIP
bound is independent of the lower RIP bound, and can always be satisfied by normalizing the matrix appropriately).
However, to prove convergence to the global optimum, we need to show that at least one new element is added at each
step, i.e., |F_t| ≥ 1. Furthermore, we need to show sufficient decrease, i.e., ‖y^{t+1}_{F_t}‖² ≥ c·(l/k)·J(x^t). We show both these
conditions for global convergence in Lemma 6, whose proof is given in Appendix A.
Lemma 6. Let δ2k < 1 − 1/(2η) and 1/2 < η < 1. Then, assuming J(x^t) > 0, at least one new element is found, i.e.,
F_t ≠ ∅. Furthermore, ‖y^{t+1}_{F_t}‖² ≥ c·(l/k)·J(x^t), where c = min(4η(1 − η), η²(2η − 1 − δ2k)) > 0 is a constant.
Assuming Lemma 6, (5) shows that at each iteration OMPR(l) reduces the objective function value by at least a
constant fraction. Furthermore, if x⁰ is chosen to have entries bounded by 1, then J(x⁰) ≤ (1 + δ2k)k. Hence, after
O((k/l) log(k/ε)) iterations, the optimal solution x* would be obtained to within ε error. □
Special Cases: We have already observed that the OMPR algorithm of the previous section is simply OMPR(1).
Also note that Theorem 1 immediately follows from Theorem 4.
The algorithm at the other extreme of l = k has appeared at least three times in the recent literature: as Iterative (hard)
Thresholding with Inversion (ITI) in [16], as SVP-Newton (in its matrix avatar) in [15], and as Hard Thresholding
Pursuit (HTP) in [10]. Let us call it IHT-Newton, as the least squares step can be viewed as a Newton step for the
quadratic objective. The above general result for the OMPR family immediately implies that it recovers sparse vectors
as soon as the measurement matrix A satisfies δ2k < 1/3.
Corollary 7. Suppose the vector x* ∈ R^n is k-sparse and the matrix A satisfies δ2k < 1/3. Then IHT-Newton
recovers x* from measurements b = Ax* in O(log(k)) iterations.
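For concreteness, a sketch of one IHT-Newton / HTP iteration, the l = k extreme (an assumed implementation): full hard thresholding of the gradient step followed by least squares on the resulting support.

import numpy as np

def iht_newton_step(A, b, x, k, eta=1.0):
    z = x + eta * (A.T @ (b - A @ x))   # gradient step
    I = np.argsort(-np.abs(z))[:k]      # support of H_k(z)
    x_next = np.zeros_like(x)
    x_next[I], *_ = np.linalg.lstsq(A[:, I], b, rcond=None)
    return x_next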
4 Tighter Analysis of Two-Stage Hard Thresholding Algorithms
Recently, Maleki and Donoho [17] proposed a novel family of algorithms, namely two-stage hard thresholding algorithms. During each iteration, these algorithms add a fixed number (say l) of elements to the current iterate's support
set. A least squares problem is solved over the larger support set and then the l elements with smallest magnitude are
dropped to form the next iterate's support set. The next iterate is then obtained by again solving a least squares problem over the next
iterate's support set. See Appendix D for a more detailed description of the algorithm; a code sketch of one iteration follows.
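A minimal sketch of one two-stage hard thresholding iteration with replacement size l (CoSaMP-style when l = 2k; this is an assumed illustration, not the authors' code):

import numpy as np

def two_stage_step(A, b, I, k, l):
    # Residual of the least squares fit on the current support I.
    r = b.copy()
    if len(I) > 0:
        xI, *_ = np.linalg.lstsq(A[:, list(I)], b, rcond=None)
        r = b - A[:, list(I)] @ xI
    # Stage 1: enlarge the support by l elements via the proxy A^T r.
    order = np.argsort(-np.abs(A.T @ r))
    new = [j for j in order if j not in I][:l]
    J = sorted(set(I) | set(new))
    # Least squares on the enlarged support, then drop the l smallest
    # entries (equivalently, keep the k largest).
    xJ, *_ = np.linalg.lstsq(A[:, J], b, rcond=None)
    keep = np.argsort(-np.abs(xJ))[:k]
    I_next = sorted(np.asarray(J)[keep].tolist())
    # Stage 2: second least squares on the reduced support.
    x = np.zeros(A.shape[1])
    x[I_next], *_ = np.linalg.lstsq(A[:, I_next], b, rcond=None)
    return x, I_next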
Using proof techniques developed for our proof of Theorem 4, we can obtain a simple proof for the entire spectrum of
algorithms in the two-stage hard thresholding family.
Theorem 8. Suppose the vector x* ∈ {−1, 0, 1}^n is k-sparse. Then the Two-Stage Hard Thresholding algorithm with
replacement size l recovers x* from measurements b = Ax* in O(k) iterations provided δ_{2k+l} ≤ 0.35.
Note that CoSaMP [19] and Subspace Pursuit (SP) [4] are popular special cases of the two-stage family. Using our
general analysis, we are able to provide significantly less restrictive RIP conditions for recovery.
Corollary 9. CoSaMP [19] recovers k-sparse x* ∈ {−1, 0, 1}^n from measurements b = Ax* provided δ4k ≤ 0.35.
Corollary 10. Subspace Pursuit [4] recovers k-sparse x* ∈ {−1, 0, 1}^n from measurements b = Ax* provided
δ3k ≤ 0.35.
Note that CoSaMP's analysis given by [19] requires δ4k ≤ 0.1, while Subspace Pursuit's analysis given by [4] requires
δ3k ≤ 0.205. See Appendix D in the supplementary material for proofs of the above theorem and corollaries.
5 Fast Implementation Using Hashing
In this section, we discuss a fast implementation of the OMPR method using locality-sensitive hashing. The
main intuition behind our approach is that the OMPR method selects at most one element at each step (given by
argmax_i |A_iᵀ(Ax^t − b)|); hence, selection of the top-most element is equivalent to finding the column A_i that is most
"similar" (in magnitude) to r_t = Ax^t − b, i.e., this may be viewed as a similarity search task for queries of the form
r_t and −r_t over a database of n vectors {A_1, …, A_n}.
To this end, we use locality sensitive hashing (LSH) [12], a well known data structure for approximate nearest-neighbor retrieval. Note that while LSH is designed for nearest neighbor search (in terms of Euclidean distances) and
in general might not have any guarantees for the similar neighbor search task, we are still able to apply it to our task
because we can lower-bound the similarity of the most similar neighbor.
We first briefly describe the LSH scheme that we use. LSH generates hash bits for a vector using randomized hash
functions that have the property that the probability of collision between two vectors is proportional to the similarity
between them. For our problem, we use the following hash function: h_u(a) = sign(uᵀa), where u ∼ N(0, I) is a
random hyperplane generated from the standard multivariate Gaussian distribution. It can be shown that [13]

    Pr[h_u(a_1) = h_u(a_2)] = 1 − (1/π) cos⁻¹( a_1ᵀa_2 / (‖a_1‖ ‖a_2‖) ).

Now, an s-bit hash key is created by randomly sampling hash functions h_{u_i}, i.e., g(a) = [h_{u_1}(a), h_{u_2}(a), …, h_{u_s}(a)], where each
u_i is sampled randomly from the standard multivariate Gaussian distribution. Next, q hash tables are constructed during the pre-processing stage using independently constructed hash-key
functions g_1, g_2, …, g_q. During the query stage, a query is indexed into each hash table using the hash-key functions
g_1, g_2, …, g_q and then the nearest neighbors are retrieved by doing an exhaustive search over the indexed elements.
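A sketch of this sign-random-projection LSH scheme in code (an assumed implementation): s-bit keys and q independent hash tables over the columns of A, probed with both r and −r as described above.

from collections import defaultdict
import numpy as np

def build_tables(A, s, q, rng):
    m, n = A.shape
    U = rng.standard_normal((q, s, m))       # q sets of s random hyperplanes
    tables = []
    for t in range(q):
        table = defaultdict(list)
        keys = (U[t] @ A) > 0                # s x n matrix of sign bits
        for i in range(n):
            table[keys[:, i].tobytes()].append(i)
        tables.append(table)
    return U, tables

def query(U, tables, r):
    # Candidate columns likely to have large |A_i^T r|: probe with r and -r.
    candidates = set()
    for t, table in enumerate(tables):
        for v in (r, -r):
            key = ((U[t] @ v) > 0).tobytes()
            candidates.update(table.get(key, []))
    return candidates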
Below we state the following theorem from [12] that guarantees sub-linear time nearest neighbor retrieval for LSH.
Theorem 11. Let s = O(log n) and q = O(log(1/δ)) n^{1/(1+ε)}; then with probability 1 − δ, LSH recovers (1 + ε)-nearest
neighbors, i.e., ‖ã − r‖² ≤ (1 + ε) ‖a* − r‖², where a* is the nearest neighbor to r and ã is a point retrieved by
LSH.
However, we cannot directly use the above theorem to guarantee convergence of our hashing based OMPR algorithm
as our algorithm requires finding the most similar point in terms of the magnitude of the inner product. Below, we provide
appropriate settings of the LSH parameters to guarantee sub-linear time convergence of our method under a slightly
weaker condition on the RIP constant. A detailed proof of the theorem below can be found in Appendix B.
Theorem 12. Let δ2k < 1/4 − γ and η = 1 − γ, where γ > 0 is a small constant; then with probability 1 − δ, OMPR
with hashing converges to the optimal solution in O(k m n^{1/(1+O(1/k))} log(k/δ)) computational steps.
The above theorem shows that the time complexity is sub-linear in n. However, currently our guarantees are not
particularly strong, as for large k the exponent of n will be close to 1. We believe that the exponent can be improved
by more careful analysis, and our empirical results indicate that LSH does speed up the OMPR method significantly.
[Figure 1; panels: (a) OMPR, (b) OMP, (c) IHT-Newton]
Figure 1: Phase transition diagrams for different methods. Red represents high probability of success while blue
represents low probability of success. Clearly, OMPR recovers the correct solution for a much larger region of the plot
than OMP and is comparable to IHT-Newton. (Best viewed in color)
6 Experimental Results
In this section we present empirical results to demonstrate accurate and fast recovery by our OMPR method. In the first
set of experiments, we present a phase transition diagram for OMPR and compare it to the phase transition diagrams
of OMP and IHT-Newton with step size 1. For the second set of experiments, we demonstrate the robustness of OMPR
compared to many existing methods when measurements are noisy or smaller in number than what is required for exact
recovery. For the third set of experiments, we demonstrate the efficiency of our LSH based implementation by comparing
the recovery error and time required for our method with OMP and IHT-Newton (with step sizes 1 and 1/2). We do not
present results for the ℓ1/basis pursuit methods, as it has already been shown in several recent papers [10, 17] that the
ℓ1 relaxation based methods are relatively inefficient for very large scale recovery problems.
In all the experiments we generate the measurement matrix by sampling each entry independently from the standard
normal distribution N(0, 1) and then normalize each column to have unit norm. The underlying k-sparse vectors are
generated by randomly selecting a support set of size k and then sampling each entry in the support set uniformly from
{+1, −1}. We use our own optimized implementations of OMP and IHT-Newton. All the methods are implemented in
MATLAB and our hashing routine uses mex files. A sketch of this data generation follows.
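The experimental setup just described, sketched in Python rather than the MATLAB used in the paper (function name is ours):

import numpy as np

def make_problem(m, n, k, rng):
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=0)             # unit-norm columns
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.choice([-1.0, 1.0], size=k)  # random sign entries
    return A, x, A @ x                            # (A, x*, b)

A, x_true, b = make_problem(200, 3000, 50, np.random.default_rng(0))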
6.1 Phase Transition Diagrams
We first compare different methods using phase transition diagrams, which are commonly used in the compressed sensing
literature to compare different methods [17]. We first fix the number of measurements to be m = 400 and generate
different problem sizes by varying ρ = k/m and δ = m/n. For each problem size (m, n, k), we generate random
m × n Gaussian measurement matrices and k-sparse random vectors. We then estimate the probability of success of
each of the methods by applying the method to 100 randomly generated instances. A method is considered successful
for a particular instance if it recovers the underlying k-sparse vector with at most 1% relative error.
In Figure 1, we show the phase transition diagram of our OMPR method as well as those of OMP and IHT-Newton (with
step size 1). The plots show the probability of successful recovery as a function of δ = m/n and ρ = k/m. Figure 1(a)
shows the color coding of different success probabilities; red represents high probability of success while blue represents
low probability of success. Note that for Gaussian measurement matrices, the RIP constant δ2k is less than a fixed
constant if and only if m = Ck log(n/k), where C is a universal constant; equivalently, 1/ρ = C log(1/(ρδ)). Hence a
method that recovers for high δ2k will have a large fraction of the phase transition diagram where the successful recovery
probability is high. We observe this phenomenon for both the OMPR and IHT-Newton methods, which is consistent with
their respective theoretical guarantees (see Theorem 4). On the other hand, as expected, the phase transition diagram
of OMP has a negligible fraction of the plot that shows high recovery probability.
6.2 Performance for Noisy or Under-sampled Observations
Next, we empirically compare the performance of OMPR to various existing compressed sensing methods. As shown
in the phase transition diagrams in Figure 1, OMPR provides recovery comparable to the IHT-Newton method for
noiseless cases. Here, we show that OMPR is fairly robust under the noisy setting as well as in the case of under-sampled observations, where the number of observations is much smaller than what is required for exact recovery.
For this experiment, we generate a random Gaussian measurement matrix of size m = 200, n = 3000. We then generate
a random binary vector x of sparsity k and add Gaussian noise to it. Figure 2(a) shows the recovery error (‖Ax − b‖)
incurred by various methods for increasing k and a noise level of 10%. Clearly, our method outperforms the existing
methods, perhaps a consequence of guaranteed convergence to a local minimum for fixed step size η = 1. Similarly,
Figure 2(b) shows the recovery error incurred by various methods for fixed k = 50 and varying noise level. Here again,
our method outperforms existing methods and is more robust to noise. Finally, in Figure 2(c) we show the difference in
[Figure 2; panels: (a) Error vs. k (noise = 10%), (b) Error vs. noise level (k = 50); methods shown include OMPR, IHT-Newton, CoSaMP, and SP. Panel (c) is the table below.]

(c) Difference in error between IHT-Newton and OMPR (95% confidence intervals in parentheses):

Noise \ k       0             10            50
0.00        0.00 (0.0)   -0.21 (0.6)    0.25 (0.3)
0.05        0.00 (0.0)    0.13 (0.3)    0.37 (0.3)
0.10        0.00 (0.0)    0.24 (0.3)    0.3 (0.4)
0.20        0.03 (0.0)    0.62 (0.2)    0.58 (0.5)
0.30        0.14 (0.1)    0.92 (0.3)    0.92 (0.6)
0.40        0.31 (0.1)    1.19 (0.3)    0.84 (0.5)
0.50        0.37 (0.1)    1.48 (0.3)    1.24 (0.6)
Figure 2: Error in recovery (‖Ax − b‖) of n = 3000 dimensional vectors from m = 200 measurements. (a): Error
incurred by various methods as the sparsity level k increases. Note that OMPR incurs the least error as it provably
converges to at least a local minimum for fixed step size η = 1. (b): Error incurred by various methods as the noise
level increases. Here again OMPR performs significantly better than the existing methods. (c): Difference in error
incurred by IHT-Newton and OMPR. Numbers in brackets denote confidence intervals at the 95% significance level.
[Figure 3; panels (a), (b), (c); panel titles include "Error vs. n (m/n = 0.001, k/m = 0.1)" and "Time vs. n (m/n = 0.001, k/m = 0.1)"; methods: OMPR, OMPR-Hash, IHT-Newton(1/2); x-axis: n (x100000)]
Figure 3: (a): Error (‖Ax − b‖) incurred by various methods as k increases. The measurements b = Ax are computed
by generating x with support size m/10. (b), (c): Error incurred and time required by various methods to recover
vectors of support size 0.1m as n increases. IHT-Newton(1/2) refers to the IHT-Newton method with step size η = 1/2.
the error incurred, along with confidence intervals (at the 95% significance level), by IHT-Newton and OMPR for varying levels
of noise and k. Our method is better than IHT-Newton (at the 95% significance level) in terms of recovery error in around
30 cells of the table, and is not worse in any of the cells but one.
6.3 Performance of the LSH-based Implementation
Next, we empirically study recovery properties of OMPR-Hasb in the following real-time setop: gooerate a raodom
measuremoot matrix from the Gaussiao ensemble aod construct bash tables ollline using hash functioos specified in
Section 5. During the reconstruction stage, measurements arrive one at a time and the goal is to recover the underlying
sigoal accurately in real-time.For our experimoots, we gooerate measuremoots using raodom sparse vectors aod thoo
report recovery error IIAx - bll aod computatiooal time required by each method averaged over 20 runs.
In our first set of experimoots, we eropirically study the performaoce of different methods as k increases. Here, we fix
m = 500, n = 500, 000 aod gooerate measuremoots using n-dimoosional raodom vectors of support set size milO.
We thoo run differeot methods to estimate vectors x of support size k that minimize IIAx - bll. For our OMPR-Hash
method, we use 8 = 20 bits bash-keys aod gooerate q = ..;n bash-tables. Figure 3 (a) shows the error incurred by
OMPR, OMPR-Hash, aod IHT-Newton for differeot k (recall that k is ao input to both OMPR aod IlIT-Newton).
Note that although OMPR-Hash performs ao approximation at each step, it is still able to achieve error similar to
OMPR aod !HT-Newton. Also, note that since the number of measuremoots are not ooough for exact recovery by the
IHT-Newton method, it typically diverges after a few steps. As a result, we use IHT-Newton with step size 1/ = 1/2
which is always goaraoteed to monotonically converge to at least a local minimum (see Theorem 4). In cootrast, in
OMPR aod OMPR-Hasb cao always set step size 1/ aggressively to be 1.
Next, we evaluate OMPR-Hash as the dimensionality n of the data increases. For OMPR-Hash, we use s = log₂(n)
bit hash-keys and q = √n hash tables. Figures 3(b) and (c) compare the error incurred and time required by OMPR-Hash
with OMPR and IHT-Newton. Here again we use step size η = 1/2 for IHT-Newton as it does not converge for η = 1.
Note that OMPR-Hash is an order of magnitude faster than OMPR while incurring slightly higher error. OMPR-Hash
is also nearly 2 times faster than IHT-Newton.
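Putting the pieces together, a sketch of one OMPR-Hash step (assumed implementation; it reuses query() and the (U, tables) structures from the LSH sketch in Section 5): the exact argmax over all columns is replaced by a search over the retrieved candidates only.

import numpy as np

def ompr_hash_step(A, b, x, I, k, U, tables, eta=1.0):
    r = A @ x - b                                  # current residual
    cand = [j for j in query(U, tables, r) if j not in I]
    if not cand:
        return x, list(I)                          # no candidate retrieved
    # Approximate argmax_j |A_j^T r| over the retrieved candidates only.
    j = max(cand, key=lambda c: abs(A[:, c] @ r))
    J = sorted(set(I) | {j})
    zJ = x[J] + eta * (A[:, J].T @ (b - A @ x))    # gradient step on I u {j}
    keep = np.argsort(-np.abs(zJ))[:k]             # hard threshold to size k
    I_next = sorted(np.asarray(J)[keep].tolist())
    x_next = np.zeros_like(x)
    x_next[I_next], *_ = np.linalg.lstsq(A[:, I_next], b, rcond=None)
    return x_next, I_next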
Acknowledgement
ISD acknowledges support from the Moncrief Grand Challenge Award.
References
[1] T. Blumensath and M. E. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational
Harmonic Analysis, 27(3):265-274, 2009.
[2] E. J. Candes. The restricted isometry property and its implications for compressed sensing. Comptes Rendus
Mathematique, 346(9-10):589-592, 2008.
[3] E. J. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory,
51(12):4203-4215, 2005.
[4] W. Dai and O. Milenkovic. Subspace pursuit for compressive sensing signal reconstruction. IEEE Transactions
on Information Theory, 55(5):2230-2249, 2009.
[5] M. A. Davenport and M. B. Wakin. Analysis of orthogonal matching pursuit using the restricted isometry
property. IEEE Transactions on Information Theory, 56(9):4395-4401, 2010.
[6] G. Davis, S. Mallat, and M. Avellaneda. Greedy adaptive approximation. Constr. Approx., 13:57-98, 1997.
[7] D. Donoho. Compressed sensing. IEEE Trans. on Information Theory, 52(4):1289-1306, 2006.
[8] D. Donoho, A. Maleki, and A. Montanari. Message passing algorithms for compressed sensing. Proceedings of
the National Academy of Sciences USA, 106(45):18914-18919, 2009.
[9] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk. Single-pixel
imaging via compressive sampling. IEEE Signal Processing Magazine, 25(2):83-91, March 2008.
[10] S. Foucart. Hard thresholding pursuit: an algorithm for compressive sensing, 2010. Preprint.
[11] S. Foucart. A note on guaranteed sparse recovery via ℓ1-minimization. Applied and Computational Harmonic
Analysis, 29(1):97-103, 2010.
[12] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In Proceedings of the
25th International Conference on Very Large Data Bases, 1999.
[13] M. X. Goemans and D. P. Williamson. .878-approximation algorithms for MAX CUT and MAX 2SAT. In STOC,
pages 422-431, 1994.
[14] D. Hsu, S. M. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In Advances
in Neural Information Processing Systems, 2009.
[15] P. Jain, R. Meka, and I. S. Dhillon. Guaranteed rank minimization via singular value projection. In Advances in
Neural Information Processing Systems, 2010.
[16] A. Maleki. Convergence analysis of iterative thresholding algorithms. In Allerton Conference on Communication,
Control and Computing, 2009.
[17] A. Maleki and D. Donoho. Optimally tuned iterative reconstruction algorithms for compressed sensing. IEEE
Journal of Selected Topics in Signal Processing, 4(2):330-341, 2010.
[18] Q. Mo and Y. Shen. Remarks on the restricted isometry property in orthogonal matching pursuit algorithm, 2011.
Preprint, arXiv:1101.4458.
[19] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied
and Computational Harmonic Analysis, 26(3):301-321, 2009.
[20] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of
M-estimators with decomposable regularizers. In Advances in Neural Information Processing Systems, 2009.
[21] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation
with applications to wavelet decomposition. In 27th Annu. Asilomar Conf. on Signals, Systems, and Computers,
volume 1, pages 40-44, 1993.
[22] H. Rauhut. On the impossibility of uniform sparse reconstruction using greedy methods. Sampling Theory in
Signal and Image Processing, 7(2):197-215, 2008.
[23] S. Shalev-Shwartz, N. Srebro, and T. Zhang. Trading accuracy for sparsity in optimization problems with sparsity
constraints. SIAM Journal on Optimization, 20:2807-2832, 2010.
[24] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. S. Huang, and S. Yan. Sparse representation for computer vision and
pattern recognition. Proceedings of the IEEE, 98(6):1031-1044, 2010.
[25] T. Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. In Advances in
Neural Information Processing Systems, 2008.
[26] T. Zhang. Sparse recovery with orthogonal matching pursuit under RIP, 2010. Preprint, arXiv:1005.2249.
Priors over Recurrent Continuous Time Processes
Ardavan Saeedi
Alexandre Bouchard-C?ot?e
Department of Statistics
University of British Columbia
Abstract
We introduce the Gamma-Exponential Process (GEP), a prior over a large family of continuous time stochastic processes. A hierarchical version of this prior
(HGEP; the Hierarchical GEP) yields a useful model for analyzing complex time
series. Models based on HGEPs display many attractive properties: conjugacy,
exchangeability and closed-form predictive distribution for the waiting times, and
exact Gibbs updates for the time scale parameters. After establishing these properties, we show how posterior inference can be carried efficiently using Particle
MCMC methods [1]. This yields a MCMC algorithm that can resample entire sequences atomically while avoiding the complications of introducing slice and stick
auxiliary variables of the beam sampler [2]. We applied our model to the problem
of estimating the disease progression in multiple sclerosis [3], and to RNA evolutionary modeling [4]. In both domains, we found that our model outperformed
the standard rate matrix estimation approach.
1 Introduction
The application of non-parametric Bayesian techniques to time series has been an active field in the
recent years, and has led to many successful continuous time models. Examples include Dependent Dirichlet Processes (DDP) [5], Ornstein-Uhlenbeck Dirichlet Processes [6], and stick-breaking
autoregressive processes [7]. One property of these models is that they are forgetful, meaning that
the effect of an observation at time $t$ on a prediction at time $t + s$ will decrease as $s \to \infty$. More
formally, DDPs and their cousins can be viewed as priors over transient processes (see Section A of
the Supplementary Material).
In some situations, emphasizing the short term trends is desirable, for example for the analysis of
financial time series. However, in other situations, this behavior does not use the data optimally.
As a concrete example of the type of time series we are interested in, consider the problem of
modeling the progression of recurrent diseases such as multiple sclerosis. Recurrent diseases are
characterized by alternations between relapse and remission periods, and patients can undergo this
cycle repeatedly. In multiple sclerosis research, measuring the effect of drugs in the presence of
these complex cycles is challenging, and is one of the applications that motivated this work.
The data available to infer the disease progression typically takes the form of summary measurements taken at different points in time for each patient. We model these measurements as being
conditionally independent given a continuous time non-parametric latent process. The main options
available for this type of situation are currently limited to parametric Bayesian models [8], or to
non-Bayesian models [9].
In this work, we propose a family of models, Gamma-Exponential Processes (GEPs), that fills this
gap. GEPs are based on priors over recurrent, infinite rate matrices specifying a jump process in a
latent space.
It is informative to start by a preview of what the predictive distributions look like in GEP models.
Indeed, an advantage of GEPs is that they have simple predictive distributions, a situation reminiscent of the theory of Dirichlet Processes, in which the simple predictive distributions (given by
the Chinese Restaurant Process (CRP)) were probably an important factor behind their widespread
adoption in Bayesian non-parametric statistics.
Suppose that the hidden state at the current time step is $\theta$, and that we are interested in the distribution over the waiting time $t$ before the next jump to a different hidden state (we will come back to the predictive distribution over what this next state is in Section 3, showing that it has the form of a CRP). Let $t_1, t_2, \ldots, t_n$ denote the previous, distinct waiting times at $\theta$. The predictive distribution is then specified by the following density over the positive reals:
$$f(t) = \frac{(\alpha_0 + n)\,(\beta_0 + T)^{\alpha_0 + n}}{(\beta_0 + T + t)^{\alpha_0 + n + 1}},$$
where $T$ is the sum over the $t_i$'s, and $\alpha_0, \beta_0$ are parameters. It can be checked that this yields an exchangeable distribution over the sequences of waiting times at $\theta$ (it forms a telescoping product; see the proof of Proposition 5 in the Supplementary Material). By de Finetti's theorem, there is
therefore a mixing prior distribution. We identify this prior in Section 3, and use it to build a
powerful hierarchical model in Section 4. As we will see, this hierarchical model displays many
attractive properties: conjugacy, exchangeability and closed-form predictive distributions for the
waiting times, and exact Gibbs updates for the time scale parameters. Moreover it admits efficient
inference algorithms, described in Section 5.
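As a quick illustration of this formula, the following sketch (our own, not from the paper) evaluates the predictive density for a memory of waiting times; the names `alpha0` and `beta0` mirror the notation above.

```python
import numpy as np

def predictive_waiting_density(t, waiting_times, alpha0=1.0, beta0=1.0):
    """Predictive density f(t) over the next waiting time at a state,
    given the distinct waiting times previously observed there."""
    n = len(waiting_times)
    T = float(np.sum(waiting_times))
    return ((alpha0 + n) * (beta0 + T) ** (alpha0 + n)
            / (beta0 + T + t) ** (alpha0 + n + 1))

# Example: the density concentrates near the empirical time scale.
ts = np.array([0.5, 1.2, 0.8])
print(predictive_waiting_density(1.0, ts))
```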
In addition to the connection to DDPs mentioned above, our models are also related to the infinite Hidden Markov Model (iHMM) [10] and to the more general Sticky-HDP-HMM [11], which
are both based on priors over discrete time processes. While continuous-time analogues of these
discrete time processes can be constructed by subordination, we discuss in Section C of the Supplementary Material the differences and advantages of GEPs compared to these subordinations. A
similar argument holds for factorial extensions of the infinite HMM [12].
Gamma (Moran) Processes [13], a building block for our process, have been used in non-parametric
Bayesian statistics, but in different contexts, for example in survival analysis [14], spatial statistics
[15], and for modeling count data [16].¹ Note also that the gamma-exponential process introduced
here is unrelated to the exponential-gamma process [18].
2 Background and notation
While our process can be defined on continuous state spaces, the essential ideas can be described
over countable state spaces. We therefore focus in this section on reviewing Continuous Time
Markov Processes (CTMPs) over a countably infinite state space.
These CTMPs can be characterized by an infinite matrix $q_{i,j}$ where the off-diagonal entries are non-negative and each row sums to zero (i.e. the diagonal entries are negative and with magnitude equal to the sum of the off-diagonal row entries). Samples from these processes take the form of a list of pairs of states and waiting times $X = (\theta_n, J_n)_{n=1}^{N}$ (see Figure 1(a)). We will call each pair of that form a (hidden) event. Typically, only a function $Y$ of the events is available. For example, measurements could be taken at fixed or random time intervals. We will come back to the partially observed sequences setup in Section 5.

To simulate a sequence of events given parameters $Q = (q_{i,j})$, we use the standard Doob-Gillespie algorithm: conditioning on the current state having index $i$, $\theta_N = i$, the waiting time before the next jump is exponentially distributed $J_{N+1} \mid (\theta_N = i) \sim \mathrm{Exp}(-q_{i,i})$, and the index $j \neq i$ of the next state $\theta_{N+1}$ is selected independently with probability proportional to $p(j) = q_{i,j}\,\mathbf{1}[i \neq j]$.

The goal of this work is to develop priors on such infinite rate matrices that are both flexible and easy to work with. To do that, we first note that the off-diagonal elements of each row $i$ can be viewed as a positive measure $\lambda_i$. Note that the normalization of this measure is not equal to one in general. We will denote the normalization constant of measures by $\|\lambda\|$ and the normalized measures by $\bar\lambda = \lambda / \|\lambda\|$.
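For readers who want to simulate from a CTMP directly, here is a minimal sketch of the Doob-Gillespie algorithm for a finite rate matrix; the function and variable names are our own.

```python
import numpy as np

def doob_gillespie(Q, start, t_max, rng=np.random.default_rng(0)):
    """Simulate a CTMP with finite rate matrix Q (rows sum to zero,
    negative diagonal) from state `start` until total time exceeds t_max.
    Returns the visited states and the waiting times between jumps."""
    states, waits = [start], []
    t, i = 0.0, start
    while t < t_max:
        rate = -Q[i, i]                      # total jump rate out of i
        J = rng.exponential(1.0 / rate)      # waiting time ~ Exp(-q_ii)
        probs = Q[i].copy()
        probs[i] = 0.0
        probs /= probs.sum()                 # next state chosen prop. to q_ij
        i = rng.choice(len(Q), p=probs)
        t += J
        waits.append(J)
        states.append(i)
    return states, waits

Q = np.array([[-1.0, 0.7, 0.3],
              [0.5, -0.9, 0.4],
              [0.2, 0.8, -1.0]])
print(doob_gillespie(Q, start=0, t_max=5.0))
```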
¹The terminology "Moran Gamma Process" is from Kingman (e.g. in [17]). It is the same process as the Gamma process used in e.g. [15], except that we have one more degree of freedom in the parameterization (the rate; this is because ours is not destructively normalized).
[Figure 1 graphic: panels (a) and (b).]
Figure 1: (a) An illustration of our notation for samples from CTMPs. We assume the state space ($\Omega$) is countable. The notation for the observations $Y(t_1), \ldots, Y(t_G)$ is described in Section 5. (b) Graphical model for the hierarchical model of Section 4. For simplicity we only show a single $J$ and $\theta$.
To get a conjugate family, we will base our priors on Moran Gamma Processes (MGPs) [13], a family of measure-valued probability distributions. MGPs have three parameters: (1) a positive real number $\alpha_0 > 0$, called the concentration or shape parameter; (2) a probability distribution $P_0 : \mathcal{F}_\Omega \to [0, 1]$, called the base probability distribution; (3) a positive real number $\beta_0 > 0$, called the rate parameter. Alternatively, the first two parameters can be grouped into a single finite base measure parameter $H_0 = \alpha_0 P_0$.

Recall that by the Kolmogorov consistency theorem, in order to guarantee the existence of a stochastic process on a probability space $(\Omega_0, \mathcal{F}_{\Omega_0})$, it is enough to provide a consistent definition of what the marginals of this stochastic process are. As the name suggests, in the case of a Moran Gamma process, the marginals are gamma distributions:

Definition 1 (Moran Gamma Process). Let $H_0, \beta_0$ be of the types listed above. We say that $\mu : \mathcal{F}_{\Omega_0} \to (\mathcal{F}_\Omega \to [0, \infty))$ is distributed according to the Moran Gamma process distribution, denoted by $\mu \sim \mathrm{MGP}(H_0, \beta_0)$, if for all measurable partitions $(A_1, \ldots, A_K)$ of $\Omega$, we have:²
$$(\mu(A_1), \mu(A_2), \ldots, \mu(A_K)) \sim \mathrm{Gamma}(H_0(A_1), \beta_0) \times \cdots \times \mathrm{Gamma}(H_0(A_K), \beta_0).$$
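Since the definition only constrains the finite-dimensional marginals, sampling an MGP restricted to a finite measurable partition reduces to independent gamma draws. A sketch, assuming the base-measure masses $H_0(A_k)$ are given as a list:

```python
import numpy as np

def sample_mgp_on_partition(H0_masses, beta0, rng=np.random.default_rng(1)):
    """Sample (mu(A_1), ..., mu(A_K)) over a finite measurable partition,
    using the defining Gamma marginals (rate parameterization)."""
    return rng.gamma(shape=np.asarray(H0_masses), scale=1.0 / beta0)

# The total mass is Gamma(||H0||, beta0) by summation of independent Gammas.
masses = sample_mgp_on_partition([0.5, 1.0, 2.5], beta0=2.0)
print(masses, masses.sum())
```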
3 Gamma-Exponential Process
We can now describe the basic version of our model, the Gamma-Exponential Process (GEP). In the
next section, we will move to a hierarchical version of this model.
In GEPs, the rows of a rate matrix Q are obtained by a transformation of iid samples from an MGP,
and the states are then generated from Q with the Doob-Gillespie algorithm described in the previous
section. In this section we show that this model is conjugate and has a closed form expression for
the predictive distribution.
Let $H_0$ be a base measure on a countable support $\Omega$ with $\|H_0\| < \infty$. We will relax the countable base measure support assumption in the next section. The GEP is formally defined as follows:
$$\lambda_\theta \overset{\text{iid}}{\sim} \mathrm{MGP}(H_0, \beta_0), \quad \theta \in \Omega,$$
$$\theta_{N+1} \mid X, \{\lambda_\theta\}_{\theta \in \Omega} \sim \bar\lambda_{\theta_N},$$
$$J_{N+1} \mid X, \{\lambda_\theta\}_{\theta \in \Omega} \sim \mathrm{Exp}(\|\lambda_{\theta_N}\|).$$
To understand the connection with the Doob-Gillespie process, note that a rate matrix can be obtained by arbitrarily ordering $\Omega = \theta^{(1)}, \theta^{(2)}, \ldots$, and setting:³ $q_{i,j} = \lambda_{\theta^{(i)}}(\{\theta^{(j)}\})$ if $i \neq j$, and $q_{i,i} = \|\lambda_{\theta^{(i)}}\|\,(\bar\lambda_{\theta^{(i)}}(\{\theta^{(i)}\}) - 1)$ otherwise. In order to model the initial distribution without cluttering the notation, we assume there is a special state $\theta_{\text{beg}}$ always present at the beginning of the sequence, and only at the beginning. In other words, we always condition on $(\theta_0 = \theta_{\text{beg}})$ and $(\theta_n \neq \theta_{\text{beg}},\ n > 0)$, and drop these conditioning events from the notation. Similarly, we consider distributions over infinite sequences in the notation that follows, but if the goal is to model finite sequences, an additional special state $\theta_{\text{end}} \neq \theta_{\text{beg}}$ can be introduced. We would then condition on $(\theta_{N+1} = \theta_{\text{end}})$ and $(\theta_n \neq \theta_{\text{end}},\ n \in \{1, \ldots, N\})$, and set the total rate for the row corresponding to $\theta_{\text{end}}$ to zero.

²We use the rate parameterization for the gamma density throughout.
³Note that the GEP as defined above can generate self-transitions, but conditioning on the parameters, the jump waiting times are still exponential. However, for computing predictive distributions, it will be simpler to allow positive self-transition rates.
Next, we show that the posterior of each row, $\lambda_\theta \mid X$, is also MGP distributed with updated parameters. We assume that all the states are observed for now, and treat the partially observed case in Section 5.

The sufficient statistics for the parameters of $\lambda_\theta \mid X$ are the empirical transition measures and waiting times:
$$F_\theta = \sum_{n=1}^{N} \mathbf{1}[\theta_{n-1} = \theta]\, \delta_{\theta_n}, \qquad T_\theta = \sum_{n=1}^{N} \mathbf{1}[\theta_{n-1} = \theta]\, J_n.$$

Proposition 2. The Gamma-Exponential Process (GEP) is a conjugate family: $\lambda_\theta \mid X \sim \mathrm{MGP}(H'_\theta, \beta'_\theta)$, where $H'_\theta = F_\theta + H_0$ and $\beta'_\theta = T_\theta + \beta_0$.
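A minimal sketch of the conjugate update: computing the sufficient statistics $F_\theta$ and $T_\theta$ from a fully observed event sequence (names are our own; the helper assumes the conventions of the Doob-Gillespie sketch above, where each waiting time is the sojourn at the preceding state):

```python
from collections import Counter, defaultdict

def gep_sufficient_stats(states, waits):
    """Empirical transition measures F[theta][theta'] and total waiting
    times T[theta] from a fully observed event sequence, as in Prop. 2."""
    F = defaultdict(Counter)
    T = Counter()
    for n in range(1, len(states)):
        prev = states[n - 1]
        F[prev][states[n]] += 1
        T[prev] += waits[n - 1]
    # posterior for row theta: MGP(F[theta] + H0, T[theta] + beta0)
    return F, T

F, T = gep_sufficient_stats(["a", "b", "a", "c"], [0.5, 1.1, 0.3])
print(dict(F), dict(T))
```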
Note that the $H'_\theta$ are unnormalized versions of the posterior parameters of a Dirichlet process. This connection with the Dirichlet process is used in the proof below, and also implies that samples from GEPs have countable support even when $\Omega$ is uncountable (i.e. the chain will always visit a random countable subset of $\Omega$). For the proof of Proposition 2, we will need the following elementary lemma:

Lemma 3. If $V \sim \mathrm{Beta}(a, b)$ and $W \sim \mathrm{Gamma}(a + b, c)$ are independent, then $VW \sim \mathrm{Gamma}(a, c)$.

See for example [19] for a survey of standard beta-gamma algebra results such as the one stated in this lemma. We now prove the proposition:
Proof. Fix an arbitrary state $\theta$ and drop the index for simplicity (this is without loss of generality since the rows are iid): let $\lambda = \lambda_\theta$, $H' = H'_\theta$, and $\beta' = \beta'_\theta$.

Let $(A_1, \ldots, A_K)$ be a measurable partition of $\Omega$. By the Kolmogorov consistency theorem, it is enough to show that for all such partitions,
$$(\lambda(A_1), \lambda(A_2), \ldots, \lambda(A_K)) \mid X \sim \mathrm{Gamma}(H'(A_1), \beta') \times \cdots \times \mathrm{Gamma}(H'(A_K), \beta').$$
Assume for simplicity that $K = 2$ (the argument can be generalized to $K > 2$ without difficulties), and let $\lambda_1 = \lambda(A_1)$, $\lambda_0 = \|\lambda\|$. By elementary properties of Gamma distributed vectors, if we let $V = \lambda_1/\lambda_0$, $W = \lambda_0$, then $V \sim \mathrm{Beta}(H_0(A_1), H_0(A_2))$, $W \sim \mathrm{Gamma}(\alpha_0, \beta_0)$, and $V, W$ are independent (both conditionally on $X$ and unconditionally). By beta-multinomial conjugacy, we also have $(V \mid X) = (V \mid \theta_1, \ldots, \theta_N) \sim \mathrm{Beta}(H'(A_1), H'(A_2))$, and by gamma-exponential conjugacy, we have $W \mid X \sim \mathrm{Gamma}(\|H'\|, \beta')$.

Using the lemma with $a = H'(A_1)$, $b = H'(A_2)$, $c = \beta'$, we finally get that $(\lambda(A_1) \mid X) = (VW \mid X) \sim \mathrm{Gamma}(H'(A_1), \beta')$, which concludes the proof.
We now turn to the task of finding an expression for the predictive distribution, $(\theta_{N+1}, J_{N+1}) \mid X$. We will need the following family of densities (see Section F for more information):

Definition 4 (Translated Pareto). Let $\alpha > 0$, $\beta > 0$. We say that a random variable $T$ is translated-Pareto, denoted $T \sim \mathrm{TP}(\alpha, \beta)$, if it has density:
$$f(t) = \frac{\mathbf{1}[t > 0]\,\alpha\,\beta^{\alpha}}{(t + \beta)^{\alpha + 1}}. \tag{1}$$

Proposition 5. The predictive distribution of the GEP is given by:
$$(\theta_{N+1}, J_{N+1}) \mid X \sim \bar{H}'_{\theta_N} \times \mathrm{TP}(\|H'_{\theta_N}\|, \beta'_{\theta_N}). \tag{2}$$
Proof. By Proposition 2, it is enough to show that if $\lambda \sim \mathrm{MGP}(H_0, \beta_0)$, $\theta \mid \lambda \sim \bar\lambda$, and $J \mid \lambda \sim \mathrm{Exp}(\|\lambda\|)$, then $(\theta, J) \sim \bar{H}_0 \times \mathrm{TP}(\alpha_0, \beta_0)$, where $\alpha_0 = \|H_0\|$.

Note first that we have $(J \mid \theta) \overset{d}{=} J$ by the fact that the minimum and argmin of independent exponential random variables are independent. To get the distribution of $J$, we need to show that the following integral is proportional to Equation (1):
$$p(t) \propto \int_{x>0} x^{\alpha_0 - 1} \exp(-\beta_0 x) \cdot x \exp(-xt)\, dx = \int_{x>0} x^{\alpha_0} \exp(-(\beta_0 + t)x)\, dx = \frac{\Gamma(\alpha_0 + 1)}{(\beta_0 + t)^{\alpha_0 + 1}}.$$
Hence $J \sim \mathrm{TP}(\alpha_0, \beta_0)$.

As a sanity check, and to connect this result with the discussion in the introduction, it is instructive to directly check that these predictive distributions are indeed exchangeable (see Section B for the proof):
Proposition 6. Let $J_{j(\theta,1)}, J_{j(\theta,2)}, \ldots, J_{j(\theta,K)}$ be the subsequence of waiting times following state $\theta$. Then the random variables $J_{j(\theta,1)}, J_{j(\theta,2)}, \ldots, J_{j(\theta,K)}$ are exchangeable. Moreover, the joint density of a sequence of waiting times $(J_{j(\theta,1)} = j_1, J_{j(\theta,2)} = j_2, \ldots, J_{j(\theta,K)} = j_K)$ is given by:
$$p(j_1, j_2, \ldots, j_K) = \frac{\mathbf{1}[j_k > 0,\ k \in \{1, \ldots, K\}]\,(\alpha_0)_K\, \beta_0^{\alpha_0}}{(\beta_0 + j_1 + \cdots + j_K)^{\alpha_0 + K}} \tag{3}$$
where the Pochhammer symbol $(x)_n$ is defined as $(x)_n = x(x+1)\cdots(x+n-1)$.
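Putting Propositions 5 and 6 to work, the sketch below (our own) draws one (next state, waiting time) pair from the GEP predictive for a single row; it assumes a base measure with finite support given as a dict, and samples the translated-Pareto waiting time by inverse-CDF.

```python
import numpy as np

def sample_gep_predictive(F_row, T_row, H0, beta0, rng=np.random.default_rng(2)):
    """One draw (next state, waiting time) from the GEP predictive of
    Proposition 5, for a single row with posterior parameters
    H' = F + H0 and beta' = T + beta0. H0 is a dict of base masses."""
    post = {s: H0.get(s, 0.0) + F_row.get(s, 0) for s in set(H0) | set(F_row)}
    total = sum(post.values())                  # ||H'_theta||
    keys = list(post)
    probs = np.array([post[k] for k in keys]) / total
    next_state = keys[rng.choice(len(keys), p=probs)]
    u = rng.uniform()                           # TP(total, beta0 + T) inverse CDF
    waiting_time = (beta0 + T_row) * (u ** (-1.0 / total) - 1.0)
    return next_state, waiting_time

print(sample_gep_predictive({"b": 2}, T_row=1.6, H0={"a": 0.5, "b": 0.5}, beta0=1.0))
```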
4 Hierarchical GEP
In this section, we present a hierarchical version of the GEP, where the rows of the random rate matrix are exchangeable rather than iid. Informally, the motivation behind this construction is to have the rows share information on what states are frequently visited.

As with Hierarchical Dirichlet Processes (HDPs) [20], the hierarchical construction is especially important when $\Omega$ is uncountable. For such spaces, since each GEP sample has a random countable support, any two independent GEP samples will have disjoint supports with probability one. Therefore, the GEP alone cannot be used to construct recurrent processes when $\Omega$ is uncountable. Fortunately, the hierarchical model introduced in this section addresses this issue: it yields a recurrent prior over continuous time jump processes over both countable and uncountable spaces $\Omega$ (see Section A).

The hierarchical process is constructed by making the base measure parameter of the rows shared and random. Formally, the model has the following form:
$$\mu_0 \sim \mathrm{MGP}(H_0, \gamma_0),$$
$$\lambda_\theta \mid \mu_0 \overset{\text{iid}}{\sim} \mathrm{MGP}(\mu_0, \beta_0), \quad \theta \in \Omega,$$
$$\theta_{N+1} \mid X, \{\lambda_\theta\}_{\theta\in\Omega} \sim \bar\lambda_{\theta_N},$$
$$J_{N+1} \mid X, \{\lambda_\theta\}_{\theta\in\Omega} \sim \mathrm{Exp}(\|\lambda_{\theta_N}\|).$$
In order to get a tractable predictive distribution, we introduce a set of auxiliary variables. These auxiliary variables can be compared to the variables used in the Chinese Restaurant Franchise (CRF) metaphor [20] to indicate when new tables are created in a given restaurant. In the HGEP, a restaurant can be understood as a row in the rate matrix, and tables, as groups of transitions to the same destination state. These auxiliary variables will be denoted by $A_n$, where the event $A_n = 1$ means informally that the $n$-th transition creates a new table. The variable takes value $A_n = 0$ otherwise. See Section D in the Supplementary Material for a review of the CRF construction and a formal definition of the auxiliary variables $A_n$.

We augment the sufficient statistics with empirical counts for the number of tables across all restaurants that share a given dish, $G = \sum_{n=1}^{N} A_n\, \delta_{\theta_n}$, and introduce one additional auxiliary variable, the normalization of the top level random measure, $\|\mu_0\|$. This latter auxiliary variable has no equivalent in CRFs. As in the previous section, the normalization of the lower level random measures $\|\lambda_\theta\|$ will be marginalized. Finally, we let:
$$H''_0 = G + H_0, \qquad H'^{(H)}_\theta = F_\theta + \|\mu_0\|\, \bar{H}''_0,$$
where $\bar{H}''_0$ can be recognized as the mean parameter of the predictive distribution of the HDP. We use the superscript $(H)$ to disambiguate from the non-hierarchical case. The main result of this section is (see Section B for the proof):
[Figure 2 plots: four state-time panels of sequences of length T = 800 sampled from the prior, with (a) $\beta_0 = 10$, $\|H_0\| = 5$, $\gamma_0 = 100$; (b) $\beta_0 = 100$, $\|H_0\| = 5$, $\gamma_0 = 10$; (c) $\beta_0 = 10$, $\|H_0\| = 500$, $\gamma_0 = 10$; (d) $\beta_0 = 1$, $\|H_0\| = 5000$, $\gamma_0 = 1$.]
Figure 2: Qualitative behavior of the prior
Proposition 7. The predictive distribution of the Hierarchical GEP (HGEP) is given by:
$$(\theta_{N+1}, J_{N+1}) \mid (X, \{A_n\}_{n=1}^{N}, \|\mu_0\|) \sim \bar{H}'^{(H)}_{\theta_N} \times \mathrm{TP}\!\left(\|H'^{(H)}_{\theta_N}\|,\ \beta'_{\theta_N}\right).$$
To resample the auxiliary variable $\|\mu_0\|$, a gamma-distributed Gibbs kernel can be used (see Section E of the Supplementary Material).
5 Inference on partially observed sequences
In this section, we describe how to approximate expectations under the posterior distribution of GEPs, $E[h(X) \mid Y]$, for a test function $h$ on the hidden events $X$ given observations $Y$. An example of a function $h$ on these events is to interpolate the progression of the disease in a patient with Multiple Sclerosis (MS) between two medical visits. We start by describing the form of the observations $Y$.

Note that in most applications, the sequence of states is not directly nor fully observed. First, instead of observing the random variables $\theta$, inference is often carried out from $\mathcal{X}$-valued random variables $Y_n$ distributed according to a parametric family $\mathcal{P}$ indexed by the states $\theta$ of the chain, $\mathcal{P} = \{L_\theta : \mathcal{F}_{\mathcal{X}} \to [0, 1],\ \theta \in \Omega\}$. Second, the measurements are generally available only for a finite set of times $\mathcal{T}$. To specify the random variables in question, we will need a notation for the event index at a given time $t$, $I(t) = \min\{N : \sum_{n=1}^{N+1} J_n > t\}$ (see Figure 1, where $I(t^*) = N - 1$), and for the individual observations, $Y(t) \mid X \sim L_{\theta_{I(t)}}$. The set of all observed random variables is then defined as $Y = (Y(t_1), Y(t_2), \ldots, Y(t_G) : t_g < t_{g+1},\ \{t_i\} = \mathcal{T})$.
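A small sketch of the event-index computation $I(t)$, assuming the waiting times are stored as an array:

```python
import numpy as np

def event_index(t, waits):
    """Index I(t) of the event occupying time t, given waiting times
    J_1..J_N: the smallest N with J_1 + ... + J_{N+1} > t."""
    cum = np.cumsum(waits)
    return int(np.searchsorted(cum, t, side="right"))

waits = [0.5, 1.0, 0.7]
print([event_index(t, waits) for t in (0.2, 0.6, 2.0)])  # 0, 1, 2
```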
For simplicity, we assume in this section that $\mathcal{P}$ is a conjugate family with respect to $H_0$. Non-conjugate models can be handled by incorporating the auxiliary variables of Algorithm 8 in [21]. We will describe inference on the model of Section 3. Extension to hierarchical models is direct (by keeping track of an additional sufficient statistic $G$, as well as the auxiliary variables $A_n$, $\|\mu_0\|$).
In general, there may be several exchangeable sequences from which we want to learn a model. For example, we learned a model for MS disease progression by using time series from several patients.⁴ We denote the number of time series by $K$, each of the form
$$Y^{(k)} = \left(Y^{(k)}(t^{(k)}_1), Y^{(k)}(t^{(k)}_2), \ldots, Y^{(k)}(t^{(k)}_G) : t^{(k)}_g < t^{(k)}_{g+1},\ \{t^{(k)}_i\} = \mathcal{T}^{(k)}\right), \quad k \in \{1, \ldots, K\}.$$

At a high level, our inference algorithm works by resampling the hidden events $X^{(k)}$ for one sequence $k$ given the sufficient statistics of the other sequences, $(F^{(\backslash k)}_\theta, T^{(\backslash k)}_\theta)$. This is done using a Sequential Monte Carlo (SMC) algorithm to construct a proposal over sequences of hidden events. Each particle in our SMC algorithm is a sequence of states and waiting times for the current sequence $k$. By using a Particle MCMC (PMCMC) method [1], we then compute an acceptance ratio

⁴Even in cases where there is a single long sequence, we recommend for efficiency reasons to partition the sequence into subsequences. In this case our proposal can be viewed as a block update.
Name      | # sequences | # datapoints | # heldout | # characters | Baseline | EM    | HGEP
Synthetic | 1000        | 10000        | 878       | 4            | 0.703    | 0.404 | 0.446
MS        | 72          | 384          | 31        | 3            | 0.516    | 0.355 | 0.277
RNA       | 1000        | 6167         | 508       | 4            | 0.648    | 0.596 | 0.426

Table 1: Summary statistics (left columns) and mean error results (right columns) for the experiments. All experiments were repeated 5 times.
that makes this proposal a valid MCMC move. As we will see shortly, the acceptance is simply given by a ratio of marginal likelihood estimators, which can be computed directly from the unnormalized particle weights.

Formally, the proposal is based on $M$ particles propagated from generation $g = 0$ up to generation $G$, where $G$ is equal to the number of measurements in the current sequence, $G = |Y^{(k)}|$. Each particle $X_{m,g}$, $m \in \{1, \ldots, M\}$, consists of a list of hidden events indexed by $n$, containing both (hidden) states and waiting times: $X_{m,g} = (\theta_{m,n}, J_{m,n})_{n=1}^{N_{m,g}}$. The pseudocode for the SMC algorithm, used for constructing the proposals, is presented in Figure 4 of the Supplementary Material.

The next step is to compute an acceptance probability for a proposed sequence of states $X^{(k)}_\star$. At each MCMC iteration, we assume that we store the value of the data likelihood estimates for the accepted state sequences. These data likelihood estimates are computed from the unnormalized weights $w_g$ (described in Figure 4 of the Supplementary Material) as follows: $L^{(k)} = \prod_{g=1}^{G} \|w_g\|$. Let $L^{(k)}$ be the estimate for the previously accepted sequence of states for observed sequence $k$, and let $L^{(k)}_\star$ be the estimate for the current MCMC iteration. The acceptance probability for the new sequence is given by $\min\{1,\ L^{(k)}_\star / L^{(k)}\}$. If it is accepted, we set $L^{(k)} = L^{(k)}_\star$.
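A sketch of the resulting accept/reject step, assuming the unnormalized weight vectors of the proposed run are available per generation (function names are ours; any constant normalization of the weights, such as $1/M$ per generation, cancels in the ratio):

```python
import numpy as np

def pmcmc_accept(weights_star, L_current, rng=np.random.default_rng(3)):
    """PMCMC accept step: the marginal likelihood estimate is the product
    over generations of the summed unnormalized particle weights; accept
    the proposed sequence with probability min(1, L_star / L_current)."""
    L_star = np.prod([w.sum() for w in weights_star])
    accept = rng.uniform() < min(1.0, L_star / L_current)
    return accept, (L_star if accept else L_current)

weights = [np.array([0.2, 0.5]), np.array([0.1, 0.3])]
print(pmcmc_accept(weights, L_current=0.2))
```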
6 Experiments
In this section we present the results of our experiments. First, we demonstrate the behavior of state
trajectories and sojourn times sampled from the prior to give a qualitative idea of the range of time
series that can be captured by our model. Second, we evaluate quantitatively our model by applying it to three held-out tasks: synthetic, Multiple Sclerosis (MS) patients, and RNA evolutionary
datasets.
6.1 Qualitative behavior of the prior
We can distinguish at least four types of prior behaviors in the HGEP when considering different values for the parameters $\beta_0$, $\|H_0\|$ and $\gamma_0$. We sampled a sequence of length $T = 800$ and present the state-time plots. Figure 2(a) shows a sequence with short sojourn times and high volatility of states, whereas Figure 2(b) depicts longer sojourn times with much less volatility. Figures 2(c) and 2(d) illustrate the effect of the hyperparameter $\|H_0\|$. In Figure 2(c) we can see the creation of many new states and a sparse transition matrix. Likewise, in Figure 2(d) the high tendency to create new states is present, but we have longer sojourn times. See Section H of the supplementary material for a more detailed account of the interpretation and quantitative effect of the parameters.
6.2 Quantitative evaluation
In this section, we use a simple likelihood model for discrete observations (described in Section G
of the supplementary material) to evaluate our method on three held-out tasks. Note that even when
the observations are discrete, non-parametric models are still useful for better explaining the data
using latent variables [22].
We considered three evaluation datasets obtained by holding out each observed datapoint with a
10% probability (see Table 1). We then reconstructed the observations at these held-out times, and
measured the mean error. For HGEP, reconstruction was done by using the Bayes estimator approximated from 1000 posterior samples (one after each scan through all the time series). We repeated
[Figure 3 plots: three panels (Synthetic, MS, RNA) showing mean held-out reconstruction error against the number of Gibbs scans (0 to 800), for HGEP and EM.]
Figure 3: Mean reconstruction error on the held-out data as a function of the number of Gibbs scans. Lower
is better. The standard maximum likelihood estimate learned with EM outperformed our model in the simple
synthetic dataset, but the trend was reversed in the more complex real world datasets.
all experiments 5 times with different random seeds. We compared against the standard maximum
likelihood rate matrix estimator learned by EM described in [23]. We also report in Table 1 the mean
error for a simpler maximum likelihood estimate ignoring the sequential information (returning the
most common observation deterministically). See Section G of the supplementary material for detailed instructions for replicating the following three results.5 Refer also to Figure 3, where we show
error as a function of the number of scans.
Synthetic: We used an Erdős–Rényi model to generate a random sparse matrix of size $10 \times 10$,
which we perturbed with uniform noise to get a random rate matrix. Both HGEP and the EM-learned
maximum likelihood outperformed the baseline. In contrast to the next two tasks, the EM approach
slightly outperformed the HGEP model here. We believe this is because the synthetic data was not
sufficiently rich to highlight the advantages of HGEPs. However, we compared our results with
iHMM after discretizing time. We observed that iHMM had an error rate of 0.47, underperforming
both EM and HGEP.
MS disease progression: This dataset, obtained from a phase III clinical trial, tracks the progression
of MS in 72 patients over 3 years. The observed state of a patient at a given time is binned into three
categories as customary in the MS literature [3]. Both HGEP and EM outperformed the baseline by
a large margin, and our HGEP model outperformed EM with a relative error reduction of 22%.
RNA evolution: In this task, we used the dataset from [4] containing aligned 16S ribosomal RNA
of species from the three domains of life. As a preprocessing, we constructed a rooted phylogenetic
tree from a sample of 30 species, and performed ancestral reconstruction using a standard CTMC
model and all the sampled taxa in the tree. We then considered the time series consisting of paths
from one modern leaf to the root. The task is to reconstruct held-out nucleotides using only the data
in this path. Again, both HGEP and EM outperformed the baseline, and our model outperformed
EM with a relative error reduction of 29%.
7 Conclusion
We have introduced a method for non-parametric Bayesian modeling of recurrent, continuous time
processes. The model has attractive properties and we show that the posterior computations can be
done efficiently using a sampler based on particle MCMC methods. Most importantly, our experiments show that the model is useful for analyzing complex real world time series.
Acknowledgments
We would like to thank Arnaud Doucet, John Petkau and the anonymous reviewers for helpful
comments. This work was supported by a NSERC Discovery Grant and the WestGrid cluster.
⁵The code used to run these experiments is available at http://www.stat.ubc.ca/~bouchard/GEP/
References
[1] C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. Journal Of The
Royal Statistical Society Series B, 2010.
[2] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov
model. In ICML, 2008.
[3] M. Mandel. Estimating disease progression using panel data. Biostatistics, 2010.
[4] J.J. Cannone, S. Subramanian, M.N. Schnare, J.R. Collett, L.M. D'Souza, Y. Du, B. Feng, N. Lin, L.V.
Madabusi, K.M. Muller, N. Pande, Z. Shang, N. Yu, and R.R. Gutell. The comparative RNA web (CRW)
site: An online database of comparative sequence and structure information for ribosomal, intron, and
other RNAs. BioMed Central Bioinformatics, 2002.
[5] S.N. MacEachern. Dependent nonparametric processes. In Section on Bayesian Statistical Science, American Statistical Association, 1999.
[6] J.E. Griffin. The Ornstein-Uhlenbeck Dirichlet process and other time-varying processes for Bayesian
nonparametric inference. Journal of Statistical Planning and Inference, 2008.
[7] J.E. Griffin and M.F.J. Steel. Stick-breaking autoregressive processes. Journal of Econometrics, 2011.
[8] M. F. J. Steel. The New Palgrave Dictionary of Economics, chapter Bayesian time series analysis. Palgrave Macmillan, 2008.
[9] S. Heiler. A survey on nonparametric time series analysis. CoFE Discussion Paper 99-05, Center of
Finance and Econometrics, University of Konstanz, 1999.
[10] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In Machine
Learning. MIT Press, 2002.
[11] E.B. Fox, E.B. Sudderth, M.I. Jordan, and A.S. Willsky. An hdp-hmm for systems with state persistence.
In Proceedings of the International Conference on Machine Learning, 2008.
[12] J. Van Gael, Y. W. Teh, and Z. Ghahramani. The infinite factorial hidden Markov model. In NIPS?08,
2008.
[13] P.A.P. Moran. The Theory of Storage. Methuen, 1959.
[14] M. Friesl. Estimation in the Koziol-Green model using a gamma process prior. Austrian Journal of
Statistics, 2008.
[15] V. Rao and Y. W. Teh. Spatial normalized gamma processes. In Advances in Neural Information Processing Systems, 2009.
[16] L. Kuo and S. K. Ghosh. Bayesian nonparametric inference for nonhomogeneous Poisson processes.
Technical report, University of Connecticut, Department of Statistics, 1997.
[17] J. F. C. Kingman. Poisson Processes. The Clarendon Press Oxford University Press, 1993.
[18] M. Schroder. Risk-neutral parameter shifts and derivatives pricing in discrete time. The Journal of
Finance, 2004.
[19] D. Dufresne. G distributions and the beta-gamma algebra. Electronic Journal of Probability, 2010.
[20] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the
American Statistical Association, 2004.
[21] R. Neal. Markov chain sampling methods for Dirichlet process mixture models. Technical report, U of T,
2000.
[22] P. Liang, S. Petrov, M. I. Jordan, and D. Klein. The infinite PCFG using hierarchical Dirichlet processes.
In Empirical Methods in Natural Language Processing and Computational Natural Language Learning
(EMNLP/CoNLL), 2007.
[23] A. Hobolth and J.L. Jensen. Statistical inference in evolutionary models of DNA sequences via the EM
algorithm. Statistical applications in Genetics and Molecular Biology, 2005.
[24] L. Mateiu and B. Rannala. Inferring complex DNA substitution processes on phylogenies using uniformization and data augmentation. Syst. Biol., 2006.
Quasi-Newton Methods
for Markov Chain Monte Carlo
Yichuan Zhang and Charles Sutton
School of Informatics
University of Edinburgh
[email protected], [email protected]
Abstract
The performance of Markov chain Monte Carlo methods is often sensitive to the
scaling and correlations between the random variables of interest. An important
source of information about the local correlation and scale is given by the Hessian
matrix of the target distribution, but this is often either computationally expensive
or infeasible. In this paper we propose MCMC samplers that make use of quasiNewton approximations, which approximate the Hessian of the target distribution
from previous samples and gradients generated by the sampler. A key issue is that
MCMC samplers that depend on the history of previous states are in general not
valid. We address this problem by using limited memory quasi-Newton methods,
which depend only on a fixed window of previous samples. On several real world
datasets, we show that the quasi-Newton sampler is more effective than standard
Hamiltonian Monte Carlo at a fraction of the cost of MCMC methods that require
higher-order derivatives.
1 Introduction
The design of effective approximate inference methods for continuous variables often requires considering the curvature of the target distribution. This is especially true of Markov chain Monte Carlo
(MCMC) methods. For example, it is well known that the Gibbs sampler mixes extremely poorly
on distributions that are strongly correlated. In a similar way, the performance of a random walk
Metropolis-Hastings algorithm is sensitive to the variance of the proposal distribution. Many samplers can be improved by incorporating second-order information about the target distribution. For
example, several authors have used a Metropolis-Hastings algorithm in which the Hessian is used
to form a covariance for a Gaussian proposal [3, 11]. Recently, Girolami and Calderhead [5] have
proposed a Hamiltonian Monte Carlo method that can require computing higher-order derivatives of
the target distribution.
Unfortunately, second derivatives can be inconvenient or infeasible to obtain and the quadratic cost
of manipulating a d ? d Hessian matrix can also be prohibitive. An appealing idea is to approximate
the Hessian matrix using the sequence of first order information of previous samples, in a manner
similar to quasi-Newton methods from the optimization literature. However, samplers that depend on
the history of previous samples must be carefully designed in order to guarantee the chain converges
to the target distribution.
In this paper, we present quasi-Newton methods for MCMC that are based on approximations to the
Hessian from first-order information. In particular, we present a Hamiltonian Monte Carlo algorithm
in which the variance of the momentum variables is based on a BFGS approximation. The key point
is that we use a limited memory approximation, in which only a small window of previous samples
are used to the approximate the Hessian. This makes it straightforward to show that our samplers are
valid, because the samples are distributed as an order-$k$ Markov chain. Second, by taking advantage
of the special structure in the Hessian approximation, the samplers require only linear time and
linear space in the dimensionality of the problem. Although this is a very natural approach, we are
unaware of previous MCMC methods that use quasi-Newton approximations. In general we know
of very few MCMC methods that make use of the rich set of approximations from the numerical
optimization literature (some exceptions include [7, 11]). On several logistic regression data sets,
we show that the quasi-Newton samplers produce samples of higher quality than standard HMC, but
with significantly less computation time than methods that require higher-order derivatives.
2 Background
In this section we provide background on Hamiltonian Monte Carlo. An excellent recent tutorial
is given by Neal [9]. Let $x$ be a random variable on state space $\mathcal{X} = \mathbb{R}^d$ with a target probability distribution $\pi(x) \propto \exp(L(x))$, and let $p$ be a Gaussian random variable on $\mathcal{P} = \mathbb{R}^d$ with density $p(p) = \mathcal{N}(p \mid 0, M)$, where $M$ is the covariance matrix. In general, Hamiltonian Monte Carlo (HMC) defines a stationary Markov chain on the augmented state space $\mathcal{X} \times \mathcal{P}$ with invariant distribution $p(x, p) = \pi(x)p(p)$. The sampler is defined using a Hamiltonian function, which up to a constant is the negative log density of $(x, p)$, given as follows:
$$H(x, p) = -L(x) + \frac{1}{2}\, p^T M^{-1} p. \tag{1}$$
In an analogy to physical systems, the first term on the RHS is called the potential energy, the second
term is called the kinetic energy, the state x is called the position variable, and p the momentum
variable. Finally, we will call the covariance $M$ the mass matrix. The most common mass matrix is the identity matrix $I$. Samples in HMC are generated as follows. First, the state $p$ is resampled from its marginal distribution $\mathcal{N}(p \mid 0, M)$. Then, given the current state $(x, p)$, a new state $(x^\star, p^\star)$ is generated by a deterministic simulation of Hamiltonian dynamics:
$$\dot{x} = M^{-1} p; \qquad \dot{p} = \nabla_x L(x). \tag{2}$$
One common approximation to this dynamical system is given by the leapfrog algorithm. A single iteration of the leapfrog algorithm is given by the recursive formulas
$$p(\tau + \tfrac{\varepsilon}{2}) = p(\tau) + \tfrac{\varepsilon}{2}\, \nabla_x L(x(\tau)), \tag{3}$$
$$x(\tau + \varepsilon) = x(\tau) + \varepsilon\, M^{-1} p(\tau + \tfrac{\varepsilon}{2}), \tag{4}$$
$$p(\tau + \varepsilon) = p(\tau + \tfrac{\varepsilon}{2}) + \tfrac{\varepsilon}{2}\, \nabla_x L(x(\tau + \tfrac{\varepsilon}{2})), \tag{5}$$
where $\varepsilon$ is the step size and $\tau$ is a discrete time variable. The leapfrog algorithm is initialized at the current sample, that is, $(x(0), p(0)) = (x, p)$. After $L$ leapfrog steps (3)-(5), the final state $(x(L\varepsilon), p(L\varepsilon))$ is used as the proposal $(x^\star, p^\star)$ in a Metropolis-Hastings correction with acceptance probability $\min[1, \exp(H(x, p) - H(x^\star, p^\star))]$. The step size $\varepsilon$ and the number of leapfrog steps $L$ are the two parameters of HMC.
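A minimal sketch of the leapfrog integrator (3)-(5); the helper names are ours, and `grad_log_target` computes $\nabla_x L(x)$:

```python
import numpy as np

def leapfrog(x, p, grad_log_target, eps, L, M_inv):
    """L leapfrog steps for H(x, p) = -L(x) + p^T M^{-1} p / 2,
    following equations (3)-(5)."""
    x, p = x.astype(float).copy(), p.astype(float).copy()
    for _ in range(L):
        p = p + 0.5 * eps * grad_log_target(x)   # half step for momentum
        x = x + eps * (M_inv @ p)                # full step for position
        p = p + 0.5 * eps * grad_log_target(x)   # half step for momentum
    return x, p

# Standard Gaussian target: grad log pi(x) = -x.
x, p = np.array([1.0, -0.5]), np.array([0.3, 0.2])
print(leapfrog(x, p, lambda z: -z, eps=0.1, L=10, M_inv=np.eye(2)))
```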
In many applications, different components of $x$ may have different scales and be highly correlated. Tuning HMC in such a situation can be very difficult. However, the performance of HMC can be improved by multiplying the state $x$ by a non-singular matrix $A$. If $A$ is chosen well, the transformed state $x' = Ax$ may at least locally be better conditioned, i.e., the new variables $x'$ may be less correlated and have similar scales, so that sampling can be easier. In the context of HMC, this transformation is equivalent to changing the mass matrix $M$. This is because the Hamiltonian dynamics of the system $(Ax, p)$ with mass matrix $M$ are isomorphic to the dynamics on $(x, A^T p)$, which is equivalent to defining the state as $(x, p)$ and using the mass matrix $M' = A^T M A$. For a more detailed version of this argument, see the tutorial of Neal [9]. So in this paper we will concentrate on tuning $M$ on the fly during sampling.

Now, if $L$ has a constant Hessian $B$ (or nearly so), then a reasonable choice of transformation is to choose $A$ so that $B = AA^T$, because then the Hessian of the log density over $x'$ will be nearly the identity. This corresponds to the choice $M = B$. For more general functions without a constant Hessian, this argument suggests the idea of employing a mass matrix $M(x)$ that is a function of the position. In this case the Hamiltonian function can be
$$H(x, p) = -L(x) + \frac{1}{2}\log\left((2\pi)^d\, |M(x)|\right) + \frac{1}{2}\, p^T M(x)^{-1} p, \tag{6}$$
where the second term on the RHS comes from the normalisation factor of the Gaussian momentum variable.
3 Quasi-Newton Approximations for Sampling
In this section, we describe the Hessian approximation that is used in our samplers. It is based on
the well-known BFGS approximation [10], but there are several customizations that we must make
to use it within a sampler. First we explain quasi-Newton methods in the context of optimization.
To minimise a function $f : \mathbb{R}^d \to \mathbb{R}$, quasi-Newton methods search for the minimum of $f(x)$ by generating a sequence of iterates $x_{k+1} = x_k - \alpha_k H_k \nabla f(x_k)$, where $H_k$ is an approximation to the inverse Hessian at $x_k$, which is computed from the previous function values and gradients. One of the most popular large scale quasi-Newton methods is limited-memory BFGS (L-BFGS) [10]. Given the previous $m$ iterates $x_{k-m+1}, x_{k-m+2}, \ldots, x_k$, the L-BFGS approximation $H_{k+1}$ is
$$H_{k+1} = \left(I - \frac{s_k y_k^T}{s_k^T y_k}\right) H_k \left(I - \frac{y_k s_k^T}{s_k^T y_k}\right) + \frac{s_k s_k^T}{s_k^T y_k}, \tag{7}$$
where $s_k = x_{k+1} - x_k$ and $y_k = \nabla f_{k+1} - \nabla f_k$. The base case of the recursion is typically chosen as $H_{k-m} = \gamma I$ for some $\gamma \in \mathbb{R}$. If $m = k$, then this is called the BFGS formula, and typically it is implemented by storing the full $d \times d$ matrix $H_k$. If $m < k$, however, this is called limited-memory
BFGS, and can be implemented much more efficiently. It can be seen that the BFGS formula (7) is a rank-two update to the previous Hessian approximation $H_k$. Therefore $H_{k+1}$ is a diagonal matrix plus a rank-$2m$ matrix, so the matrix-vector product $H_k \nabla f(x_k)$ can be computed in linear time $O(md)$. Typically the product $Hv$ is implemented by a special two-loop recursive algorithm [10].
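For reference, here is a sketch of that two-loop recursion (our own, following the standard formulation in [10]); it computes $H_k v$ without ever forming $H_k$:

```python
import numpy as np

def lbfgs_two_loop(v, s_list, y_list, gamma=1.0):
    """Compute H_k @ v in O(md) with the classic L-BFGS two-loop recursion;
    memory pairs (s_i, y_i) ordered oldest to newest, base case gamma * I."""
    q = np.array(v, dtype=float)
    alphas, rhos = [], []
    for s, y in reversed(list(zip(s_list, y_list))):   # newest to oldest
        rho = 1.0 / np.dot(y, s)
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        rhos.append(rho)
        alphas.append(alpha)
    r = gamma * q
    for (s, y), rho, alpha in zip(zip(s_list, y_list), reversed(rhos),
                                  reversed(alphas)):   # oldest to newest
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return r

s_list = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
y_list = [np.array([0.9, 0.1]), np.array([0.1, 1.1])]
print(lbfgs_two_loop(np.array([1.0, 2.0]), s_list, y_list))
```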
In contrast to optimization methods, most sampling methods need a factorized form of $H_k$ to draw samples from $\mathcal{N}(0, H_k)$. More precisely, we adopt the factorization $H_k = S_k S_k^T$, so that we can generate a sample as $p = S_k z$ where $z \sim \mathcal{N}(0, I)$. The matrix operations needed to obtain $S_k$, e.g. the Cholesky decomposition, cost $O(d^3)$. To avoid this cost, we need a way to compute $S_k$ that does not require constructing the matrix $H_k$ explicitly. Fortunately there is a variant of the BFGS formula that maintains $S_k$ directly [2], which is
$$H_{k+1} = S_{k+1} S_{k+1}^T; \qquad S_{k+1} = (I - p_k q_k^T)\, S_k \tag{8}$$
$$B_{k+1} = C_{k+1} C_{k+1}^T; \qquad C_{k+1} = (I - u_k t_k^T)\, C_k \tag{9}$$
$$p_k = \frac{s_k}{s_k^T y_k}; \qquad q_k = \sqrt{\frac{s_k^T y_k}{s_k^T B_k s_k}}\, B_k s_k + y_k \tag{10}$$
$$t_k = \frac{s_k}{s_k^T B_k s_k}; \qquad u_k = \sqrt{\frac{s_k^T B_k s_k}{s_k^T y_k}}\, y_k + B_k s_k \tag{11}$$
where $B_k = H_k^{-1}$ denotes the Hessian matrix approximation. Again, we will use a limited-memory version of these updates, in which the recursion is stopped at $H_{k-m} = \gamma I$.
As for the running time of the above approximation, computing $S_k$ requires $O(m^2 d)$ time and $O(md)$ space, so it is still linear in the dimensionality. The matrix-vector product $S_{k+1} z$ can be computed by a sequence of inner products, $S_{k+1} z = \left[\prod_{i=k-m-1}^{k} (I - p_i q_i^T)\right] S_{k-m}\, z$, in time $O(md)$.

A second issue is that we need $H_k$ to be positive definite if it is to be used as a covariance matrix. It can be shown [10] that $H_k$ is positive definite if for all $i \in (k-m+1, k)$ we have $s_i^T y_i > 0$. For a convex function $f$, an optimizer can be arranged so that this condition always holds, but we cannot do this in a sampler. Instead, we first sort the previous samples $\{x_i\}$ in ascending order with respect to $L(x)$, and then check if there are any adjacent pairs $(x_i, x_{i+1})$ such that the resulting $s_i$ and $y_i$ have $s_i^T y_i \le 0$. If this happens, we remove the point $x_{i+1}$ from the memory and recompute $s_i$, $y_i$ using $x_{i+2}$, and so on. In this way we can ensure that $H_k$ is always positive definite.
Although we have described BFGS as relying on a memory of "previous" points, e.g., previous iterates of an optimization algorithm, or previous samples of an MCMC chain, in principle the BFGS equations could be used to generate a Hessian approximation from any set of points $\mathbf{X} = \{x_1, \ldots, x_m\}$. To emphasize this, we will write $H_{\mathrm{BFGS}} : \mathbf{X} \mapsto H_k$ for the function that maps a "pseudo-memory" $\mathbf{X}$ to the inverse Hessian $H_k$. This function first sorts $x \in \mathbf{X}$ by $L(x_i)$, then computes $s_i = x_{i+1} - x_i$ and $y_i = \nabla L(x_{i+1}) - \nabla L(x_i)$, then filters the $x_i$ as described above so that $s_i^T y_i > 0$ for all $i$, and finally computes the Hessian approximation $H_k$ using the recursion (8)-(11).
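The sketch below assembles the factor $S$ from a window of samples using the updates (8)-(11) as reconstructed above; for clarity it stores $S$ and $B$ densely, whereas the paper keeps them implicit for $O(md)$ cost, and we take gradients of the energy $-L$ so that the curvature condition $s_i^T y_i > 0$ holds for a log-concave target. All names are our own.

```python
import numpy as np

def bfgs_factor(samples, energy_grads, gamma=1.0):
    """Build S with H = S S^T from a window of samples via the factored
    updates (8)-(11). `energy_grads` are gradients of -L(x). Dense for
    clarity only; the memory is assumed already ordered."""
    d = samples[0].shape[0]
    S = np.sqrt(gamma) * np.eye(d)
    B = (1.0 / gamma) * np.eye(d)       # B = H^{-1}, kept in sync with S
    for i in range(len(samples) - 1):
        s = samples[i + 1] - samples[i]
        y = energy_grads[i + 1] - energy_grads[i]
        if np.dot(s, y) <= 0:           # filter pairs violating curvature
            continue
        Bs = B @ s
        q = np.sqrt(np.dot(s, y) / np.dot(s, Bs)) * Bs + y
        S = (np.eye(d) - np.outer(s / np.dot(s, y), q)) @ S
        # standard dense BFGS update of B keeps (S, B) consistent
        B = B - np.outer(Bs, Bs) / np.dot(s, Bs) + np.outer(y, y) / np.dot(s, y)
    return S

rng = np.random.default_rng(4)
xs = [rng.normal(size=2) for _ in range(4)]
gs = xs                                  # energy gradient of N(0, I) is x itself
S = bfgs_factor(xs, gs)
print(S @ rng.normal(size=2))            # a momentum draw p ~ N(0, S S^T)
```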
4 Quasi-Newton Markov Chain Monte Carlo
In this section, we describe two new quasi-Newton samplers. They will both follow the same structure, which we describe now. Intuitively, we want to use the characteristics of the target distribution
to accelerate the exploration of the region with high probability mass. The previous samples provide information about the target distribution, so it is reasonable to use them to adapt the kernel.
However, naively tuning the sampling parameters using all previous samples may lead to an invalid
chain, that is, a chain that does not have $\pi$ as its invariant distribution.

Our samplers will use a simple solution to this problem. Rather than adapting the kernel using all of the previous samples in the Markov chain, we will adapt using a limited window of $K$ previous samples. The chain as a whole will then be an order-$K$ Markov chain. It is easiest to analyze this chain by converting it into a first-order Markov chain over an enlarged space. Specifically, we build a Markov chain in a $K$-fold product space $\mathcal{X}^K$ with the stationary distribution $p(x_{1:K}) = \prod_{i=1}^{K} \pi(x_i)$. We denote a state of this chain by $x_{t-K+1}, x_{t-K+2}, \ldots, x_t$. We use the shorthand notation $x^{(t)}_{1:K \setminus i}$ for the subset of $x^{(t)}_{1:K}$ excluding $x^{(t)}_i$.
Our samplers will then update one component of x1:K per iteration, in a Gibbs-like fashion. We
(t)
define a transition kernel Ti that only updates the ith component of x1:K , that is:
(t)
(t)
(t)
Ti (x1:K , x01:K ) = ?(x1:K\i , x01:K\i )B(xi , x0i |x1:K\i ),
(12)
where B(xi , x0i |x1:K\i ) is called the base kernel that is a MCMC kernel in X and adapts with
(t)
x1:K\i . If B leaves ?(xi ) invariant for all fixed values of x1:K\i , it is straightforward to show that
(t)
Ti leaves p invariant. Then, the sampler as a whole updates each of the components xi in sequence,
so that the method as a whole is described by the kernel
T (x1:K , x01:K ) = T1 ? T2 . . . ? TK (x1:K , x01:K ),
(13)
where Ti ? Tj denotes composition of kernels Ti and Tj . Because the each kernel Ti leaves p(x1:K )
invariant, the composition kernel T also leaves p(x1:K ) invariant. Such an adaptive scheme is
equivalent to using an ensemble of K chains and changing the kernel of each chain with the state
of the others. It is called the ensemble-chain adaptation (ECA) in this paper. One early example of
ECA is found in [4]. To simplify the analysis of the validity of the chain, we assume the base kernel
B is irreducible in one iteration. This assumption can be satisfied by many popular MCMC kernels.
4.1 Using BFGS within Metropolis-Hastings
A simple way to incorporate quasi-Newton approximations within MCMC is to use the Metropolis-Hastings (M-H) algorithm. The intuition is to fit the Gaussian proposal distribution to the target distribution, so that points in a high probability region are more likely to be proposed. We will call this algorithm MHBFGS. Specifically, the proposal distribution of MHBFGS is defined as $q(x' \mid x^{(t)}_{1:K}) = \mathcal{N}(x'; \mu, \Sigma)$, where the proposal mean $\mu = \mu(x^{(t)}_{1:K})$ and covariance $\Sigma = \Sigma(x^{(t)}_{1:K})$ depend on the state of all $K$ chains.

Several choices for the mean function are possible. One simple choice is to use one of the samples in the window as the mean, e.g., $\mu(x^{(t)}_{1:K}) = x^{(t)}_1$. Another potentially better choice is a Newton step from $x_t$. For the covariance function, we will use the BFGS approximation $\Sigma(x_{1:K}) = H_{\mathrm{BFGS}}(x_{1:K})$. The proposal $x'$ of $T_1$ is accepted with probability
$$\alpha(x^{(t)}_1, x') = \min\left\{1,\ \frac{\pi(x')\, q(x^{(t)}_1 \mid x^{(t)}_{2:K}, x')}{\pi(x^{(t)}_1)\, q(x' \mid x^{(t)}_1, x^{(t)}_{2:K})}\right\}. \tag{14}$$
If $x'$ is rejected, $x^{(t)}_1$ is duplicated as the new sample. Because the Gaussian proposal $q(A \mid x^{(t)}_1, x^{(t)}_{2:K})$ has positive probability for all $A \in \mathcal{X}$, the M-H kernel is irreducible within one iteration. Because the M-H algorithm with the acceptance ratio defined in (14) leaves $\pi(x)$ invariant, MHBFGS is a valid method that leaves $p(x_{1:K})$ invariant. Although MHBFGS is simple and intuitive, in preliminary experiments we have found that the MHBFGS sampler may converge slowly in high dimensions. In general, Metropolis-Hastings with a Gaussian proposal can suffer from random-walk behavior, even if the true Hessian is used. For this reason, next we incorporate the BFGS approximation into a more sophisticated sampling algorithm.
Algorithm 1 HMCBFGS
Input: current memory $(x^{(t)}_1, x^{(t)}_2, \ldots, x^{(t)}_K)$
Output: next memory $(x^{(t+1)}_1, x^{(t+1)}_2, \ldots, x^{(t+1)}_K)$
1: $p \sim \mathcal{N}(0, B_{\mathrm{BFGS}}(x^{(t)}_{2:K}))$
2: $(x^\star, p^\star) \leftarrow \mathrm{Leapfrog}(x^{(t)}_1, p)$ using (3)-(5)
3: $u \sim \mathrm{Unif}[0, 1]$
4: if $u \le \exp(H(x^{(t)}_1, p \mid x^{(t)}_{2:K}) - H(x^\star, p^\star \mid x^{(t)}_{2:K}))$ then
5:   $x^{(t+1)}_K \leftarrow x^\star$
6: else
7:   $x^{(t+1)}_K \leftarrow x^{(t)}_1$
8: end if
9: $x^{(t+1)}_{1:K-1} \leftarrow x^{(t)}_{2:K}$
10: return $(x^{(t+1)}_1, x^{(t+1)}_2, \ldots, x^{(t+1)}_K)$
4.2 Using BFGS within Hamiltonian Monte Carlo
Better convergence speed can be achieved by incorporating BFGS within the HMC kernel. The
high-level idea is to start with the M H B FGS algorithm, but to replace the Gaussian proposal with a
simulation of Hamiltonian dynamics. However, we will need to be a bit careful in order to ensure that
the Hamiltonian is separable, because otherwise we would need to employ a generalized leapfrog
integrator [5] which is significantly more expensive.
The new samples in HMC-BFGS are generated as follows. As before, we update one component of x_{1:K}^{(t)} at a time. Say that we are updating component i. First we sample a new value of the momentum variable p ~ N(0, B_BFGS(x_{1:K\i}^{(t)})). It is important that, when constructing the BFGS approximation, we do not use the value x_i^{(t)} that we are currently resampling. Then we simulate the Hamiltonian dynamics starting at the point (x_i^{(t)}, p) using the leapfrog method (3)-(5). The Hamiltonian energy used for this dynamics is simply

H_i(x_{1:K}^{(t)}, p) = -L(x_i^{(t)}) + (1/2) p^T H_BFGS(x_{1:K\i}^{(t)})^{-1} p.    (15)

This yields a proposed value (x*, p*). Finally, the proposal is accepted with probability min(1, exp(H(x_i, p) - H(x_i*, p*))), where H is given by (15); p* is discarded after the M-H correction. This procedure is summarized in Algorithm 1.
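Putting the pieces together, a compact Python rendering of Algorithm 1 might look as follows. The constructor `bfgs_prec`, which builds B_BFGS from the rest of the window, is assumed rather than implemented, and the `leapfrog` routine sketched above is reused; as in standard HMC, the same matrix serves as both the momentum covariance and the mass matrix.

```python
import numpy as np

def hmc_bfgs_step(memory, log_pi, grad_log_pi, bfgs_prec, leapfrog, rng):
    """One HMC-BFGS update of the ensemble memory (Algorithm 1).

    memory    : list [x_1, ..., x_K]; x_1 is resampled, the rest fix the metric
    bfgs_prec : callable giving the mass matrix B_BFGS(x_{2:K})
    """
    x1, rest = memory[0], memory[1:]
    B = bfgs_prec(rest)                       # built without the resampled point
    p = rng.multivariate_normal(np.zeros_like(x1), B)

    def H(x, mom):                            # separable Hamiltonian, eq. (15)
        return -log_pi(x) + 0.5 * mom @ np.linalg.solve(B, mom)

    x_star, p_star = leapfrog(x1, p, grad_log_pi, B)
    if rng.uniform() <= np.exp(min(0.0, H(x1, p) - H(x_star, p_star))):
        x_new = x_star                        # accept the proposal
    else:
        x_new = x1                            # reject: duplicate x_1
    return rest + [x_new]                     # lines 5-9: shift the window
```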
The HMC-BFGS update is an instance of the general ECA scheme described above, with base kernel

B(x_i, x'_i | x_{1:K\i}) = ∫∫ B̃(x_i, p_i, x'_i, p'_i | x_{1:K\i}) dp_i dp'_i,

where B̃(x_i, p_i, x'_i, p'_i | x_{1:K\i}) is a standard HMC kernel with mass matrix B_BFGS(x_{1:K\i}) that includes sampling p_i. The Hamiltonian energy function of B̃ given by (15) is separable, which means that x_i appears only in the potential energy.
It is easy to see that B is a valid kernel on X, so as an ECA method, HMC-BFGS leaves p(x_{1:K}) = ∏_i π(x_i) invariant.
It is interesting to consider whether the method is also valid in the augmented space X^K x P^K, i.e., whether Algorithm 1 leaves the distribution

p(x_{1:K}, p_{1:K}) = ∏_{i=1}^{K} π(x_i) N(p_i; 0, B_BFGS(x_{1:K\i}^{(t)}))

invariant. Interestingly, this is not true, because every update to x_i changes the Gaussian factors for the momentum variables p_j for j ≠ i in a way that the Metropolis-Hastings correction in lines 4-8 does not consider. So despite the auxiliary variables, it is easiest to establish validity in the original space.
HMC-BFGS has two advantages: it is a simple approach that uses only gradients, and it is computationally efficient, since the cost of all matrix operations (namely in lines 1 and 2 of Algorithm 1) scales as O(Kd). But, being an ECA method, HMC-BFGS has the disadvantage that the larger the number of chains K, the more the updates are "spread across" the chains, so that each chain receives a small number of updates during a fixed amount of computation time. In Section 6 we evaluate empirically whether this potential drawback is outweighed by the advantages of using approximate second-order information.
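The O(Kd) cost claim rests on never forming a dense Hessian: products with the inverse BFGS approximation can be computed with the classic two-loop recursion of Nocedal and Wright [10]. A generic sketch, assuming the displacement/gradient-difference pairs from the sample window are given and satisfy the curvature condition:

```python
import numpy as np

def lbfgs_inv_hessian_times(v, s_pairs, y_pairs, gamma=1.0):
    """Compute H^{-1} v by the two-loop recursion in O(Kd) time.

    s_pairs, y_pairs : K pairs s_j = x_{j+1} - x_j, y_j = g_{j+1} - g_j
                       (assumed to satisfy y_j @ s_j > 0)
    gamma            : scale of the initial inverse Hessian H_0 = gamma * I
    """
    q = v.copy()
    stack = []
    for s, y in reversed(list(zip(s_pairs, y_pairs))):   # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        stack.append((a, rho, s, y))
    r = gamma * q
    for a, rho, s, y in reversed(stack):                 # oldest pair first
        b = rho * (y @ r)
        r += (a - b) * s
    return r
```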
5    Related Work
Girolami and Calderhead [5] propose a new HMC method called Riemannian manifold Hamiltonian Monte Carlo (RMHMC), in which M(x) can be any positive definite matrix. In their work, M(x) is chosen to be the expected Fisher information matrix, and the experimental results show that RMHMC can converge much faster than many other MCMC methods. Girolami and Calderhead adopted a generalised leapfrog method that is a reversible and volume-preserving approximation to the resulting non-separable Hamiltonian. However, such a method may require computing third-order derivatives of L, which can be infeasible in many applications.
Barthelme and Chopin [1] pointed out the possibility of using an approximate BFGS Hessian in RMHMC for computational efficiency. Similarly, Roy [14] suggested iteratively updating the local metric approximation. Roy also emphasized the potential effect of such an iterative approximation on validity, a main problem that we address here. An early example of ECA is adaptive direction sampling (ADS) [4], in which each sample is taken along a random direction that is chosen based on the samples from a set of chains. However, the validity of ADS can be established only when the size of the ensemble is greater than the number of dimensions; otherwise the samples are trapped in a subspace. HMC-BFGS avoids this problem because the BFGS Hessian approximation is full rank.
There has been a large amount of interest in adaptive MCMC methods that accumulate information from all previous samples. These methods must be designed carefully, because if the kernel is adapted with the full sampling history in a naive way, the sampler can be invalid [13]. A well-known example of a correct adaptive algorithm is the Adaptive Metropolis algorithm [6], which adapts the Gaussian proposal of a Metropolis-Hastings algorithm based on the empirical covariance of previous samples in a way that maintains ergodicity. For such a method to remain valid, the adaptation of the kernel must keep decreasing over time. In practice, the parameters of the kernel in many diminishing-adaptation methods converge to a single value over the entire state space. This can be problematic if we want the sampler to adapt to local characteristics of the target distribution, e.g., if different regions of the target distribution have different curvature. By using a finite memory of recent samples, our method avoids this problem.
6    Experiments
We test HMC-BFGS on two different models, Bayesian logistic regression and Bayesian conditional random fields (BCRFs). We compare HMC-BFGS to standard HMC, which uses an identity mass matrix, and to RMHMC, which requires computing the Hessian matrix. All methods are implemented in Java.^1 We do not report results for MH-BFGS because preliminary experiments showed that it was much worse than either HMC or HMC-BFGS. The datasets for Bayesian logistic regression are those used for RMHMC in [5]. For HMC and HMC-BFGS we employ a random step size ε ~ Unif[0.9 ε_max, ε_max], where ε_max is the maximum step size. For RMHMC, we used a fixed step size ε = 0.5 for all datasets, following the setting in [5].
For HMC and HMC-BFGS we tuned the number of leapfrog steps L on one data set (the German data set) and used that value on all datasets. We chose the smallest number of leaps that did not degrade the performance of the sampler: L = 40 for HMC and L = 20 for HMC-BFGS. For RMHMC, we employed L = 6 leaps, following Girolami and Calderhead [5]. For HMC-BFGS, we heuristically chose the number of ensemble chains K to be slightly higher than d/2.

^1 Our implementation was based on the Matlab code of RMHMC of Girolami and Calderhead and checked against the original Matlab version.
For each method, we drew 5000 samples after 1000 burn-in samples. Convergence speed is measured by effective sample size (ESS) [5], which summarizes the amount of autocorrelation across different lags over all dimensions.^2 A more detailed description of ESS can be found in [5]. Because HMC-BFGS displays more correlation within an individual chain than across chains, we calculate the ESS separately for the individual chains in the ensemble; the overall ESS is then the sum over the individual chains. The final ESS on each data set is obtained by averaging over 10 runs with different initialisations.
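As a rough guide to the metric, ESS for a single dimension can be estimated from the empirical autocorrelations, truncating the sum at the first negative estimate. The sketch below is a simplified stand-in; the paper uses the code accompanying [5], which may differ in detail.

```python
import numpy as np

def effective_sample_size(x):
    """Crude ESS estimate for a 1-D chain: n / (1 + 2 * sum_k rho_k),
    with the autocorrelation sum truncated at the first negative lag."""
    x = np.asarray(x, dtype=float)
    n = x.size
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x @ x)
    tail = 0.0
    for rho in acf[1:]:
        if rho < 0.0:
            break
        tail += rho
    return n / (1.0 + 2.0 * tail)
```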
ESS        Min    Mean   Max    Time (s)   ES/s
HMC        3312   3862   4445   7.56       739
HMC-BFGS   3643   4541   4993   4.74       1470
RMHMC      4819   4950   5000   483.00     107

Table 1: Performance of MCMC samplers on Bayesian logistic regression, as measured by Effective Sample Size (ESS). Higher is better. Averaged over five datasets. ES/s is the number of effective samples per second.
Dataset      D    N      HMC    HMC-BFGS   RMHMC
Australian   15   690    396    818        18
German       25   1000   255    397        3
Heart        14   532    1054   2009       54
Pima         8    270    591    1383       118
Ripley       7    250    1396   2745       344

Table 2: Effective samples per second on Bayesian logistic regression. D is the number of regression coefficients and N is the size of the training data set.
The results on ESS averaged over the five Bayesian logistic regression datasets are given in Table 1. Our ESS numbers for HMC and RMHMC essentially replicate the results in [5]. RMHMC achieves the highest minimum, mean, and maximum ESS, all very close to the total number of samples (5000). However, because HMC and our method only require computing the gradient, they outperform RMHMC in terms of mean ESS per second. HMC-BFGS gains a 10%, 17%, and 12% increase in minimum, mean, and maximum ESS over HMC, while needing only half the number of leaps. Detailed per-dataset performance is shown in Table 2.
The second model that we use is a Bayesian CRF on a small natural language dataset of FAQs from Usenet [8]. A linear-chain CRF is used with a Gaussian prior on the parameters. The model has 120 parameters and has been used previously [12, 15]. In a CRF it is intractable to compute the Hessian matrix exactly, so RMHMC is infeasible. For HMC-BFGS we use K = 5 ensemble chains. Each method is again tested 10 times with different initial points. For each chain we draw 8000 samples with 1000 burn-in. We use the step size ε = 0.02 and the number of leaps L = 10 for both HMC and HMC-BFGS. This parameter setting gives an 84% acceptance rate for both HMC and HMC-BFGS (averaged over the 10 runs).
Figure 1 shows the sample trajectory plots for HMC and HMC-BFGS on seven randomly selected dimensions. It is clear that HMC-BFGS exhibits remarkably less autocorrelation than HMC. The ESS statistics in Table 3 give a quantitative evaluation of the performance of HMC and HMC-BFGS. The results suggest that the BFGS approximation dramatically reduces sample autocorrelation with only a small increase in computational overhead on this dataset.
Finally, we evaluate the scalability of the methods on a highly correlated 1000-dimensional Gaussian N(0, 11^T + 4). Using an ensemble of K = 5 chains, the samples from HMC-BFGS are less correlated than those from HMC along the largest-eigenvalue direction (Figure 2).

^2 We use the code from [5] to compute the ESS of samples.
ESS        Min   Mean   Max    Time (s)   ES/h
HMC        3     9      25     35743      1
HMC-BFGS   26    438    5371   37387      42

Table 3: Performance of MCMC samplers on Bayesian CRFs, as measured by Effective Sample Size (ESS). Higher is better. ES/h is the number of effective samples per hour.
[Figure 1: Sample trace plots of 7000 samples from the posterior of a Bayesian CRF using HMC (left) and our method HMC-BFGS (right), from a single run of each sampler; each line represents a dimension. Axis tick values omitted.]
[Figure 2: Sample autocorrelation function (ACF) of samples projected onto the direction of the largest eigenvector of the 1000-dimensional Gaussian, using HMC (left) and HMC-BFGS (right). Axis tick values omitted.]
7    Discussion
To the best of our knowledge, this paper presents the first adaptive MCMC methods to employ quasi-Newton approximations. Naive attempts at combining these ideas (such as MH-BFGS) do not work well. On the other hand, HMC-BFGS is more effective than the state-of-the-art sampler on several real-world data sets. Furthermore, HMC-BFGS works well on a high-dimensional model, where full second-order methods are infeasible, with little extra overhead over regular HMC.

As for future work, our current method may not work well in regions where the log-density is not concave, because there the true Hessian is not positive definite. Another potential issue is that the asymptotic independence between the chains in ECA methods may lead to poor Hessian approximations. On a brighter note, our work raises the interesting possibility that quasi-Newton methods, which are almost exclusively used within the optimization literature, may be useful more generally.
References
[1] S. Barthelme and N. Chopin. Discussion on Riemannian Manifold Hamiltonian Monte Carlo. Journal of the Royal Statistical Society, B (Statistical Methodology), 73:163-164, 2011. doi: 10.1111/j.1467-9868.2010.00765.x.
[2] K. Brodlie, A. Gourlay, and J. Greenstadt. Rank-one and rank-two corrections to positive definite matrices expressed in product form. IMA Journal of Applied Mathematics, 11(1):73-82, 1973.
[3] S. Chib, E. Greenberg, and R. Winkelmann. Posterior simulation and Bayes factors in panel count data models. Journal of Econometrics, 86(1):33-54, June 1998. URL http://ideas.repec.org/a/eee/econom/v86y1998i1p33-54.html.
[4] W. R. Gilks, G. O. Roberts, and E. I. George. Adaptive direction sampling. The Statistician, 43(1):179-189, 1994.
[5] M. Girolami and B. Calderhead. Riemannian manifold Hamiltonian Monte Carlo (with discussion). Journal of the Royal Statistical Society, B (Statistical Methodology), 73:123-214, 2011. doi: 10.1111/j.1467-9868.2010.00765.x.
[6] H. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm. Bernoulli, 7(2):223-242, 2001.
[7] J. S. Liu, F. Liang, and W. H. Wong. The multiple-try method and local optimization in Metropolis sampling. Journal of the American Statistical Association, 95(449):121-134, 2000.
[8] A. McCallum. Frequently asked questions data set. http://www.cs.umass.edu/~mccallum/data/faqdata.
[9] R. M. Neal. MCMC using Hamiltonian dynamics. In S. Brooks, A. Gelman, G. Jones, and X.-L. Meng, editors, Handbook of Markov Chain Monte Carlo. Chapman & Hall / CRC Press, 2010.
[10] J. Nocedal and S. J. Wright. Numerical Optimization. Springer-Verlag, New York, 1999. ISBN 0-387-98793-2.
[11] Y. Qi and T. P. Minka. Hessian-based Markov chain Monte Carlo algorithms. In First Cape Cod Workshop on Monte Carlo Methods, September 2002.
[12] Y. Qi, M. Szummer, and T. P. Minka. Bayesian conditional random fields. In Artificial Intelligence and Statistics (AISTATS), Barbados, January 2005.
[13] G. O. Roberts and J. S. Rosenthal. Coupling and ergodicity of adaptive MCMC. Journal of Applied Probability, 44(2):458-475, 2007.
[14] D. M. Roy. Discussion on Riemannian Manifold Hamiltonian Monte Carlo. Journal of the Royal Statistical Society, B (Statistical Methodology), 73:194-195, 2011. doi: 10.1111/j.1467-9868.2010.00765.x.
[15] M. Welling and S. Parise. Bayesian random fields: The Bethe-Laplace approximation. In Uncertainty in Artificial Intelligence (UAI), 2006.
3,827 | 4,465 | Online Submodular Set Cover,
Ranking, and Repeated Active Learning
Jeff Bilmes
Department of Electrical Engineering
University of Washington
[email protected]
Andrew Guillory
Department of Computer Science
University of Washington
[email protected]
Abstract
We propose an online prediction version of submodular set cover with connections
to ranking and repeated active learning. In each round, the learning algorithm
chooses a sequence of items. The algorithm then receives a monotone submodular function and suffers loss equal to the cover time of the function: the number of
items needed, when items are selected in order of the chosen sequence, to achieve
a coverage constraint. We develop an online learning algorithm whose loss converges to approximately that of the best sequence in hindsight. Our proposed
algorithm is readily extended to a setting where multiple functions are revealed at
each round and to bandit and contextual bandit settings.
1    Problem
In an online ranking problem, at each round we choose an ordered list of items and then incur some
loss. Problems with this structure include search result ranking, ranking news articles, and ranking
advertisements. In search result ranking, each round corresponds to a search query and the items
correspond to search results. We consider online ranking problems in which the loss incurred at
each round is the number of items in the list needed to achieve some goal. For example, in search
result ranking a reasonable loss is the number of results the user needs to view before they find the
complete information they need. We are specifically interested in problems where the list of items is
a sequence of questions to ask or tests to perform in order to learn. In this case the ranking problem
becomes a repeated active learning problem. For example, consider a medical diagnosis problem
where at each round we choose a sequence of medical tests to perform on a patient with an unknown
illness. The loss is the number of tests we need to perform in order to make a confident diagnosis.
We propose an approach to these problems using a new online version of submodular set cover.
A set function F(S) defined over a ground set V is called submodular if it satisfies the following diminishing returns property: for every A ⊆ B ⊆ V \ {v}, F(A + v) − F(A) ≥ F(B + v) − F(B). Many natural objectives measuring information, influence, and coverage turn out to be submodular [1, 2, 3]. A set function is called monotone if for every A ⊆ B, F(A) ≤ F(B), and normalized if F(∅) = 0. Submodular set cover is the problem of selecting an S ⊆ V minimizing |S| under the constraint that F(S) ≥ 1, where F is submodular, monotone, and normalized (note we can always rescale F). This problem is NP-hard, but a greedy algorithm gives a solution with cost less than 1 + ln(1/ε) times that of the optimal solution, where ε is the smallest non-zero gain of F [4].
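For reference, the offline greedy algorithm behind this guarantee is straightforward; a minimal sketch, with F given as a set function over frozensets and V as a set (the container choices are ours):

```python
def greedy_set_cover(F, V):
    """Greedy for submodular set cover: repeatedly add the element with the
    largest marginal gain until F(S) >= 1. For monotone, normalized F this
    achieves the 1 + ln(1/eps) approximation cited above."""
    S = set()
    while F(frozenset(S)) < 1:
        gains = {v: F(frozenset(S | {v})) - F(frozenset(S)) for v in V - S}
        if not gains:
            break                      # ground set exhausted
        v_best = max(gains, key=gains.get)
        if gains[v_best] <= 0:
            break                      # no further progress possible
        S.add(v_best)
    return S
```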
We propose the following online prediction version of submodular set cover, which we simply call online submodular set cover. At each time step t = 1 ... T we choose a sequence of elements S^t = (v_1^t, v_2^t, ..., v_n^t), where each v_i^t is chosen from a ground set V of size n (we use a superscript for rounds of the online problem and a subscript for other indices). After choosing S^t, an adversary reveals a submodular, monotone, normalized function F^t, and we suffer loss ℓ(F^t, S^t), where

ℓ(F^t, S^t) ≜ min({n} ∪ {i : F^t(S_i^t) ≥ 1})    (1)

and S_i^t ≜ ∪_{j≤i} {v_j^t} is defined to be the set containing the first i elements of S^t (let S_0^t ≜ ∅). Note that ℓ can be equivalently written ℓ(F^t, S^t) = Σ_{i=0}^{n} I(F^t(S_i^t) < 1), where I is the indicator function. Intuitively, ℓ(F^t, S^t) corresponds to a bounded version of cover time: it is the number of items, up to n, needed to achieve F^t(S) ≥ 1 when we select items in the order specified by S^t. Thus, if coverage is not achieved, we suffer a loss of n. We assume that F^t(V) ≥ 1 (therefore coverage is achieved if S^t does not contain duplicates) and that the sequence of functions (F^t)_t is chosen in advance (by an oblivious adversary). The goal of our learning algorithm is to minimize the total loss Σ_t ℓ(F^t, S^t).
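The loss (1) is simple to compute directly. A small helper, again with F as a set function over frozensets and S as an ordered list (conventions ours):

```python
def cover_time_loss(F, S):
    """Equation (1): l(F, S) = min({n} u {i : F(S_i) >= 1}), with S_0 = {}."""
    prefix = set()
    for i in range(len(S) + 1):
        if F(frozenset(prefix)) >= 1:
            return i                  # covered after i items
        if i < len(S):
            prefix.add(S[i])
    return len(S)                     # coverage never reached: loss n
```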
To make the problem clear, we present it first in its simplest, full-information version. However, we will later consider more complex variations, including (1) a version where we only produce a list of length k ≤ n instead of n, (2) a multiple-objective version where a set of objectives F_1^t, F_2^t, ..., F_m^t is revealed each round, (3) a bandit (partial information) version where we do not get full access to F^t and instead only observe F^t(S_1^t), F^t(S_2^t), ..., F^t(S_n^t), and (4) a contextual bandit version where there is some context associated with each round.
We argue that online submodular set cover, as we have defined it, is an interesting and useful model for ranking and repeated active learning problems. In a search result ranking problem, after presenting search results to a user we can obtain implicit feedback from this user (e.g., clicks, time spent viewing each result) to determine which results were actually relevant. We can then construct an objective F^t(S) such that F^t(S) ≥ 1 iff S covers or summarizes the relevant results. Alternatively, we can avoid explicitly constructing an objective by considering the bandit version of the problem, where we only observe the values F^t(S_i^t). For example, if the user clicked on k total results then we can let F(S_i^t) ≜ c_i/k, where c_i ≤ k is the number of results in the subset S_i which were clicked. Note that the user may click an arbitrary set of results in an arbitrary order, and the user's decision whether or not to click a result may depend on previously viewed and clicked results. All that we assume is that there is some unknown submodular function explaining the click counts. If the user clicks on a small number of very early results, then coverage is achieved quickly and the ordering is desirable. This coverage objective makes sense if we assume that the set of results the user clicked are of roughly equal importance and together summarize the results of interest to the user.
In the medical diagnosis application, we can define F^t(S) to be proportional to the number of candidate diseases which are eliminated after performing the set of tests S on patient t. If we assume that a particular test result always eliminates a fixed set of candidate diseases, then this function is submodular. Specifically, this objective is the reduction in the size of the version space [5, 6]. Other active learning problems can also be phrased in terms of satisfying a submodular coverage constraint, including problems that allow for noise [7]. Note that, as in the search result ranking problem, F^t is not initially known but can be inferred after we have chosen S^t and suffered loss ℓ(F^t, S^t).
2    Background and Related Work
Recently, Azar and Gamzu [8] extended the O(ln(1/ε)) greedy approximation algorithm for submodular set cover to the more general problem of minimizing the average cover time of a set of objectives. Here ε is the smallest non-zero gain over all the objectives. Azar and Gamzu [8] call this problem ranking with submodular valuations. More formally, we have a known set of functions F_1, F_2, ..., F_m, each with an associated weight w_i. The goal is then to choose a permutation S of the ground set V to minimize Σ_{i=1}^{m} w_i ℓ(F_i, S). The offline approximation algorithm for ranking with submodular valuations will be a crucial tool in our analysis of online submodular set cover. In particular, this offline algorithm can be viewed as constructing the best single permutation S for a sequence of objectives F^1, F^2, ..., F^T in hindsight (i.e., after all the objectives are known). Recently the ranking with submodular valuations problem was extended to metric costs [9].
Online learning is a well-studied problem [10]. In one standard setting, the online learning algorithm has a collection of actions A, and at each time step t the algorithm picks an action S^t ∈ A. The learning algorithm then receives a loss function ℓ^t, and the algorithm incurs the loss value for the action it chose, ℓ^t(S^t). We assume ℓ^t(S^t) ∈ [0, 1] but make no other assumptions about the form of the loss. The performance of an online learning algorithm is often measured in terms of regret, the difference between the loss incurred by the algorithm and the loss of the best single fixed action in hindsight:

R = Σ_{t=1}^{T} ℓ^t(S^t) − min_{S∈A} Σ_{t=1}^{T} ℓ^t(S).

There are randomized algorithms which guarantee E[R] ≤ √(T ln |A|) for adversarial sequences of loss functions [11]. Note that because E[R] = o(T), the per-round regret approaches zero. In the bandit version of this problem the learning algorithm only observes ℓ^t(S^t) [12].
Our problem fits in this standard setting with A chosen to be the set of all ground set permutations (v_1, v_2, ..., v_n) and ℓ^t(S^t) ≜ ℓ(F^t, S^t)/n. However, in this case A is very large, so standard online learning algorithms which keep weight vectors of size |A| cannot be directly applied. Furthermore, our problem generalizes an NP-hard offline problem which has no polynomial time approximation scheme, so it is not likely that we will be able to derive any efficient algorithm with o(T ln |A|) regret. We therefore instead consider α-regret, the loss incurred by the algorithm as compared to α times the best fixed prediction:

R_α = Σ_{t=1}^{T} ℓ^t(S^t) − α min_{S∈A} Σ_{t=1}^{T} ℓ^t(S).

α-regret is a standard notion of regret for online versions of NP-hard problems. If we can show R_α grows sublinearly with T then we have shown that the loss converges to that of an offline approximation with ratio α.
Streeter and Golovin [13] give online algorithms for the closely related problems of submodular function maximization and min-sum submodular set cover. In online submodular function maximization, the learning algorithm selects a set S^t with |S^t| ≤ k before F^t is revealed, and the goal is to maximize Σ_t F^t(S^t). This problem differs from ours in that our problem is a loss minimization problem as opposed to an objective maximization problem. Online min-sum submodular set cover is similar to online submodular set cover except the loss is not cover time but rather

ℓ̂(F^t, S^t) ≜ Σ_{i=0}^{n} max(1 − F^t(S_i^t), 0).    (2)
Min-sum submodular set cover penalizes 1 − F^t(S_i^t) where submodular set cover uses I(F^t(S_i^t) < 1). We claim that for certain applications the hard threshold makes more sense. For example, in repeated active learning problems, minimizing Σ_t ℓ(F^t, S^t) naturally corresponds to minimizing the number of questions asked. Minimizing Σ_t ℓ̂(F^t, S^t) does not have this interpretation, as it charges less for questions asked when F^t is closer to 1. One might hope that minimizing ℓ could be reduced to or shown equivalent to minimizing ℓ̂. This is not likely to be the case, as the approximation algorithm of Streeter and Golovin [13] does not carry over to online submodular set cover. Their online algorithm is based on approximating an offline algorithm which greedily maximizes Σ_t min(F^t(S), 1). Azar and Gamzu [8] show that this offline algorithm, which they call the cumulative greedy algorithm, does not achieve a good approximation ratio for average cover time.
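The contrast between the two losses is easy to see in code; mirroring `cover_time_loss` above:

```python
def min_sum_loss(F, S):
    """Equation (2): l_hat(F, S) = sum_{i=0}^{n} max(1 - F(S_i), 0).
    Unlike the hard threshold in l, every prefix earns partial credit."""
    prefix, total = set(), 0.0
    for i in range(len(S) + 1):
        total += max(1.0 - F(frozenset(prefix)), 0.0)
        if i < len(S):
            prefix.add(S[i])
    return total
```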
Radlinski et al. [14] consider a special case of online submodular function maximization applied to search result ranking. In their problem the objective function is assumed to be a binary-valued submodular function, with 1 indicating the user clicked on at least one document. The goal is then to maximize the number of queries which receive at least one click. For binary-valued functions ℓ̂ and ℓ are the same, so in this setting minimizing the number of documents a user must view before clicking on a result is a min-sum submodular set cover problem. Our results generalize this problem to minimizing the number of documents a user must view before some possibly non-binary submodular objective is met. With non-binary objectives we can incorporate richer implicit feedback such as multiple clicks and time spent viewing results. Slivkins et al. [15] generalize the results of Radlinski et al. [14] to a metric space bandit setting.
Our work differs from the online set cover problem of Alon et al. [16]; that problem is a single set cover problem in which the items that need to be covered are revealed one at a time. Kakade et al. [17] analyze general online optimization problems with linear loss. If we assume that the functions F^t are all taken from a known finite set of functions F, then we have linear loss over a |F|-dimensional space. However, this approach gives poor dependence on |F|.
3    Offline Analysis
In this work we present an algorithm for online submodular set cover which extends the offline algorithm of Azar and Gamzu [8] for the ranking with submodular valuations problem. Algorithm 1 shows this offline algorithm, called the adaptive residual updates algorithm. Here we use T to denote the number of objective functions and a superscript t to index the set of objectives. This notation is chosen to make the connection to the subsequent online algorithm clear: our online algorithm will approximately implement Algorithm 1 in an online setting, and in this case the set of objectives in the offline algorithm will be the sequence of objectives in the online problem.
Algorithm 1 Offline Adaptive Residual
Input: Objectives F^1, F^2, ..., F^T
Output: Sequence S_1 ⊂ S_2 ⊂ ... ⊂ S_n
S_0 ← ∅
for i ← 1 ... n do
    v ← argmax_{v∈V} Σ_t Δ(F^t, S_{i−1}, v)
    S_i ← S_{i−1} + v
end for

Figure 1: Histograms used in the offline analysis
The algorithm is a greedy algorithm similar to the standard algorithm for submodular set cover. The crucial difference is that instead of the normal gain term F^t(S + v) − F^t(S), it uses a relative gain term

Δ(F^t, S, v) ≜ min((F^t(S + v) − F^t(S)) / (1 − F^t(S)), 1) if F^t(S) < 1, and 0 otherwise.

The intuition is that (1) a small gain for F^t matters more if F^t is close to being covered (F^t(S) close to 1), and (2) gains for F^t with F^t(S) ≥ 1 do not matter, as these functions are already covered.
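A direct (unoptimized) implementation of the relative gain and of Algorithm 1 may clarify the selection rule; for simplicity the sketch skips already-chosen elements, which is harmless since duplicates have zero gain:

```python
def relative_gain(F, S, v):
    """Delta(F, S, v): marginal gain of v normalized by the remaining deficit."""
    fs = F(frozenset(S))
    if fs >= 1:
        return 0.0
    return min((F(frozenset(S | {v})) - fs) / (1.0 - fs), 1.0)

def offline_adaptive_residual(objectives, V):
    """Algorithm 1: greedily pick the element maximizing the total relative
    gain summed over all objectives F^1, ..., F^T."""
    S, order = set(), []
    for _ in range(len(V)):
        v = max(V - S, key=lambda u: sum(relative_gain(F, S, u)
                                         for F in objectives))
        S.add(v)
        order.append(v)
    return order
```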
The main result of Azar and Gamzu [8] is that Algorithm 1 is approximately optimal.

Theorem 1 ([8]). The loss Σ_t ℓ(F^t, S) of the sequence S produced by Algorithm 1 is within a factor 4(ln(1/ε) + 2) of that of any other sequence.
We note that Azar and Gamzu [8] allow for weights for each F^t; we omit weights for simplicity. Also, Azar and Gamzu [8] do not allow the sequence S to contain duplicates, while we do: selecting a ground set element twice has no benefit, but allowing duplicates will be convenient for the online analysis. The proof of Theorem 1 involves representing solutions to the submodular ranking problem as histograms. Each histogram is defined such that the area of the histogram is equal to the loss of the corresponding solution. The approximate optimality of Algorithm 1 is shown by proving that the histogram for the solution it finds is approximately contained within the histogram for the optimal solution. In order to convert Algorithm 1 into an online algorithm, we will need a stronger version of Theorem 1. Specifically, we will need to show that when there is some additive error in the greedy selection rule, Algorithm 1 is still approximately optimal.
For the optimal solution S* = argmin_{S∈V^n} Σ_t ℓ(F^t, S) (here V^n is the set of all length-n sequences of ground set elements), define a histogram h* with T columns, one for each function F^t. Let the t-th column have width 1 and height equal to ℓ(F^t, S*). Assume that the columns are ordered by increasing cover time, so that the histogram is monotone non-decreasing. Note that the area of this histogram is exactly the loss of S*.
For a sequence of sets ∅ = S_0 ⊆ S_1 ⊆ ... ⊆ S_n (e.g., those found by Algorithm 1), define a corresponding sequence of truncated objectives

F̃_i^t(S) ≜ min((F^t(S ∪ S_{i−1}) − F^t(S_{i−1})) / (1 − F^t(S_{i−1})), 1) if F^t(S_{i−1}) < 1, and 1 otherwise.

F̃_i^t(S) is essentially F^t except with (1) S_{i−1} given "for free", and (2) values rescaled to range between 0 and 1. We note that F̃_i^t is submodular and that if F^t(S) ≥ 1 then F̃_i^t(S) ≥ 1. In this sense F̃_i^t is an easier objective than F^t. Also, for any v, F̃_i^t({v}) − F̃_i^t(∅) = Δ(F^t, S_{i−1}, v). In other words, the gain of F̃_i^t at ∅ is the normalized gain of F^t at S_{i−1}. This property will be crucial.
We next define truncated versions of h*: ĥ_1, ĥ_2, ..., ĥ_n, which correspond to the loss of S* for the easier covering problems involving the F̃_i. For each j ∈ 1 ... n, let ĥ_i have T columns of height j, with the t-th such column of width F̃_i^t(S_j*) − F̃_i^t(S_{j−1}*) (some of these columns may have 0 width). Assume again that the columns are ordered by height. Figure 1 shows h* and ĥ_i.
We assume without loss of generality that F^t(S_n*) ≥ 1 for every t (clearly some choice of S* contains no duplicates, so under our assumption that F^t(V) ≥ 1 we also have F^t(S_n*) ≥ 1). Note that the total width of ĥ_i is then the number of functions remaining to be covered after S_{i−1} is given for free (i.e., the number of F^t with F^t(S_{i−1}) < 1). It is not hard to see that the total area of ĥ_i is Σ_t ℓ̂(F̃_i^t, S*), where ℓ̂ is the loss function for min-sum submodular set cover (2). From this we know ĥ_i has area less than h*. In fact, Azar and Gamzu [8] show the following.

Lemma 1 ([8]). ĥ_i is completely contained within h* when ĥ_i and h* are aligned along their lower right boundaries.
We need one final lemma before proving the main result of this section. For a sequence S, define Q_i = Σ_t Δ(F^t, S_{i−1}, v_i) to be the total normalized gain of the i-th selected element, and let Λ_i = Σ_{j=i}^{n} Q_j be the sum of the normalized gains from i to n. Define Φ_i = |{t : F^t(S_{i−1}) < 1}| to be the number of functions which are still uncovered before v_i is selected (i.e., the loss incurred at step i). [8] show the following result relating Φ_i to Λ_i.

Lemma 2 ([8]). For any i, Λ_i ≤ (ln(1/ε) + 2) Φ_i.
We now state and prove the main result of this section: Algorithm 1 is approximately optimal even when the i-th greedy selection is performed with some additive error R_i. This theorem shows that in order to achieve low average cover time it suffices to approximately implement Algorithm 1. Aside from being useful for converting Algorithm 1 into an online algorithm, this theorem may be useful for applications in which the ground set V is very large. In these situations it may be possible to approximate Algorithm 1 (e.g., through sampling). Streeter and Golovin [13] prove similar results for submodular function maximization and min-sum submodular set cover. Our result is similar, but the proof is non-trivial. The loss function ℓ is highly non-linear with respect to changes in F^t(S_i^t), so it is conceivable that small additive errors in the greedy selection could have a large effect. The analysis of Im and Nagarajan [9] involves a version of Algorithm 1 which is robust to a sort of multiplicative error in each stage of the greedy selection.
Theorem 2. Let S = (v_1, v_2, ..., v_n) be any sequence for which

Σ_t Δ(F^t, S_{i−1}, v_i) + R_i ≥ max_{v∈V} Σ_t Δ(F^t, S_{i−1}, v).

Then Σ_t ℓ(F^t, S) ≤ 4(ln(1/ε) + 2) Σ_t ℓ(F^t, S*) + n Σ_i R_i.
Proof. Let h be a histogram with a column for each Φ_i with Φ_i ≠ 0. Let β = ln(1/ε) + 2. Let the i-th column have width (Q_i + R_i)/(2β) and height max(Φ_i − Σ_j R_j, 0)/(2(Q_i + R_i)). Note that Φ_i ≠ 0 iff Q_i + R_i ≠ 0, as if there are functions not yet covered then there is some set element with non-zero gain (and vice versa). The area of h is

Σ_{i: Φ_i ≠ 0} [(Q_i + R_i)/(2β)] · [max(Φ_i − Σ_j R_j, 0)/(2(Q_i + R_i))] ≥ (1/(4β)) Σ_t ℓ(F^t, S) − (n/(4β)) Σ_j R_j.

Assume h and every ĥ_i are aligned along their lower right boundaries. We show that if the i-th column of h has non-zero area then it is contained within ĥ_i. Then it follows from Lemma 1 that h is contained within h*, completing the proof.

Consider the i-th column in h. Assume this column has non-zero area, so Φ_i ≥ Σ_j R_j. This column is at most (Λ_i + Σ_{j≥i} R_j)/(2β) away from the right-hand boundary. To show that this column is in ĥ_i, it suffices to show that after selecting the first k = ⌊(Φ_i − Σ_j R_j)/(2(Q_i + R_i))⌋ items in S* we still have Σ_t (1 − F̃_i^t(S_k*)) ≥ (Λ_i + Σ_{j≥i} R_j)/(2β). The most that Σ_t F̃_i^t can increase through the addition of one item is Q_i + R_i. Therefore, using the submodularity of F̃_i^t,

Σ_t F̃_i^t(S_k*) − Σ_t F̃_i^t(∅) ≤ k(Q_i + R_i) ≤ Φ_i/2 − Σ_j R_j/2.

Therefore Σ_t (1 − F̃_i^t(S_k*)) ≥ Φ_i/2 + Σ_j R_j/2, since Σ_t (1 − F̃_i^t(∅)) = Φ_i. Using Lemma 2,

Φ_i/2 + Σ_j R_j/2 ≥ Λ_i/(2β) + Σ_j R_j/2 ≥ (Λ_i + Σ_{j≥i} R_j)/(2β).
Algorithm 2 Online Adaptive Residual
Input: Integer T
Initialize n online learning algorithms E_1, E_2, ..., E_n with A = V
for t = 1 ... T do
    for all i ∈ 1 ... n: predict v_i^t with E_i
    S^t ← (v_1^t, ..., v_n^t)
    Receive F^t, pay loss ℓ(F^t, S^t)
    For each E_i, assign loss ℓ^t(v) ← 1 − Δ(F^t, S_{i−1}^t, v)
end for

Figure 2: E_i selects the i-th element in S^t.

4    Online Analysis
We now show how to convert Algorithm 1 into an online algorithm. We use the same idea used by Streeter and Golovin [13] and Radlinski et al. [14] for online submodular function maximization: we run n copies of some low-regret online learning algorithm, E_1, E_2, ..., E_n, each with action space A = V. We use the i-th copy E_i to select the i-th item in each predicted sequence S^t. In other words, the predictions of E_i will be v_i^1, v_i^2, ..., v_i^T. Figure 2 illustrates this. Our algorithm assigns loss values to each E_i so that, assuming E_i has low regret, E_i approximately implements the i-th greedy selection in Algorithm 1; a sketch of the resulting loop follows below. Algorithm 2 shows this approach. Note that under our assumption that F^1, F^2, ..., F^T is chosen by an oblivious adversary, the loss values for the i-th copy of the online algorithm are oblivious to the predictions of that run of the algorithm. Therefore we can use any algorithm for learning against an oblivious adversary.
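The reduction is mechanical, as the sketch below shows. Here `learners` stands for any list of low-regret full-information algorithms over V with a predict/update interface; the interface and the reuse of `relative_gain` and `cover_time_loss` from the earlier sketches are our own conventions.

```python
def online_adaptive_residual(T, V, learners, reveal_objective):
    """Algorithm 2: learner i picks the i-th item of S^t; once F^t is revealed,
    learner i is charged loss 1 - Delta(F^t, S^t_{i-1}, v) for each action v."""
    total_loss = 0.0
    for t in range(T):
        S = [E.predict() for E in learners]      # assemble S^t
        F = reveal_objective(t, S)               # adversary reveals F^t
        total_loss += cover_time_loss(F, S)      # pay l(F^t, S^t)
        prefix = set()
        for i, E in enumerate(learners):
            E.update({v: 1.0 - relative_gain(F, prefix, v) for v in V})
            prefix.add(S[i])
        # Duplicate handling from the text is omitted for brevity.
    return total_loss
```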
Theorem 3. Assume we use as a subroutine an online prediction algorithm with expected regret E[R] ≤ √(T ln n). Algorithm 2 has expected α-regret E[R_α] ≤ n²√(T ln n) for α = 4(ln(1/ε) + 2).

Proof. Define a meta-action ṽ_i for the sequence of actions chosen by E_i: ṽ_i = (v_i^1, v_i^2, ..., v_i^T). We can extend the domain of F^t to allow for meta-actions: F^t(S ∪ {ṽ_i}) = F^t(S ∪ {v_i^t}). Let S̃ be the sequence of meta-actions S̃ = (ṽ_1, ṽ_2, ..., ṽ_n). Let R_i be the regret of E_i. Note that from the definition of regret and our choice of loss values we have that

max_{v∈V} Σ_t Δ(F^t, S̃_{i−1}, v) − Σ_t Δ(F^t, S̃_{i−1}, ṽ_i) = R_i.

Therefore, S̃ approximates the greedy solution in the sense required by Theorem 2 (which did not require that S be constructed from V). From Theorem 2 we then have

Σ_t ℓ(F^t, S^t) = Σ_t ℓ(F^t, S̃) ≤ α Σ_t ℓ(F^t, S*) + n Σ_i R_i.

The expected α-regret is then E[n Σ_i R_i] ≤ n²√(T ln n).
We describe several variations and extensions of this analysis, some of which mirror those for related work [13, 14, 15].

Avoiding Duplicate Items. Since each run of the online prediction algorithm is independent, Algorithm 2 may select the same ground set element multiple times. This drawback is easy to fix: we can simply select an arbitrary v_i ∉ S_{i−1} whenever E_i selects a v_i ∈ S_{i−1}. This modification does not affect the regret guarantee, as selecting a v_i ∈ S_{i−1} always results in a gain of zero (loss of 1).
Truncated Loss. In some applications we only care about the first k items in the sequence S^t. For these applications it makes sense to consider a truncated version of ℓ(F^t, S^t) with parameter k:

ℓ^k(F^t, S^t) ≜ min({k} ∪ {|S_i^t| : F^t(S_i^t) ≥ 1})

This is cover time computed up to the k-th element in S^t. The analysis for Theorem 2 also shows Σ_t ℓ^k(F^t, S^t) ≤ 4(ln(1/ε) + 2) Σ_t ℓ(F^t, S*) + k Σ_{i=1}^{k} R_i. The corresponding regret bound is then k²√(T ln n). Note that here we are bounding the truncated loss Σ_t ℓ^k(F^t, S^t) in terms of the untruncated loss Σ_t ℓ(F^t, S*); in this sense the bound is weaker. However, we replace n² with k², which may be much smaller. Algorithm 2 achieves this bound simultaneously for all k.
Multiple Objectives per Round. Consider a variation of online submodular set cover in which instead of receiving a single objective F^t each round we receive a batch of objectives F_1^t, F_2^t, ..., F_m^t and incur loss Σ_{i=1}^{m} ℓ(F_i^t, S^t). In other words, each round corresponds to a ranking with submodular valuations problem. It is easy to extend Algorithm 2 to this setting by using 1 − (1/m) Σ_{i=1}^{m} Δ(F_i^t, S_{i−1}^t, v) for the loss of action v in E_i. We then get O(k²√(mL* ln n) + k²m ln n) total regret, where L* = Σ_{t=1}^{T} Σ_{i=1}^{m} ℓ(F_i^t, S*) (Section 2.6 of [10]).
Bandit Setting. Consider a setting where instead of receiving full access to F^t we only observe the sequence of objective function values F^t(S_1^t), F^t(S_2^t), ..., F^t(S_n^t) (or, in the case of multiple objectives per round, F_i^t(S_j^t) for every i and j). We can extend Algorithm 2 to this setting using a nonstochastic multiarmed bandit algorithm [12]. We note that duplicate removal becomes more subtle in the bandit setting: should we feed back a gain of zero when a duplicate is selected, or the gain of the non-duplicate replacement? We propose that either is valid if replacements are chosen obliviously.
Bandit Setting with Expert Advice. We can further generalize the bandit setting to the contextual bandit setting [18] (e.g., the bandit setting with expert advice [12]). Say that we have access at time step t to predictions from a set of m experts. Let ṽ_j be the meta-action corresponding to the sequence of predictions from the j-th expert, and let Ṽ be the set of all ṽ_j. Assume that E_i guarantees low regret with respect to Ṽ:

Σ_t Δ(F^t, S_{i−1}^t, v_i^t) + R_i ≥ max_{ṽ∈Ṽ} Σ_t Δ(F^t, S_{i−1}^t, ṽ)    (3)

where we have extended the domain of each F^t to include meta-actions as in the proof of Theorem 3. Additionally assume that F^t(Ṽ) ≥ 1 for every t. In this case we can show Σ_t ℓ^k(F^t, S^t) ≤ α min_{S'∈Ṽ^m} Σ_t ℓ^m(F^t, S') + k Σ_{i=1}^{k} R_i. The Exp4 algorithm [12] has R_i = O(√(nT ln m)), giving total regret O(k²√(nT ln m)). Experts may use context in forming recommendations. For example, in a search ranking problem the context could be the query.
5    Experimental Results

5.1    Synthetic Example
We present a synthetic example for which the online cumulative greedy algorithm [13] fails, based on the example in Azar and Gamzu [8] for the offline setting. Consider an online ad placement problem where the ground set V is a set of available ad placement actions (e.g., a v ∈ V could correspond to placing an ad on a particular web page for a particular length of time). On round t, we receive an ad from an advertiser, and our goal is to acquire β clicks for the ad using as few advertising actions as possible. Define F^t(S_i^t) to be min(c_i^t, β)/β, where c_i^t is the number of clicks acquired from the ad placement actions S_i^t.

Say that we have n advertising actions of two types: 2 broad actions and n − 2 narrow actions. Say that the ads we receive are also of two types. Common-type ads occur with probability (n − 1)/n and receive 1 and β − 1 clicks respectively from the two broad actions, and 0 clicks from narrow actions. Uncommon-type ads occur with probability 1/n and receive β clicks from one randomly chosen narrow action and 0 clicks from all other actions. Assume β ≥ n². Intuitively, broad actions could correspond to ad placements on sites for which many ads are relevant. The optimal strategy, giving an average cover time of O(1), is to first select the two broad actions covering all common ads and then select the narrow actions in any order. However, the offline cumulative greedy algorithm will pick all narrow actions before picking the broad action with gain 1, giving average cover time O(n).
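One round of the synthetic construction can be generated as follows (a sketch of our reading of the setup; the click bookkeeping and β symbol follow the reconstruction above):

```python
import random

def sample_ad_objective(n, beta, rng=random):
    """Draw one round's objective for the synthetic ad example: broad actions
    0 and 1 give 1 and beta - 1 clicks on common ads, while each uncommon ad
    is covered by a single random narrow action."""
    clicks = [0] * n
    if rng.random() < (n - 1) / n:        # common-type ad
        clicks[0], clicks[1] = 1, beta - 1
    else:                                  # uncommon-type ad
        clicks[rng.randrange(2, n)] = beta

    def F(S):                              # F(S) = min(clicks(S), beta) / beta
        return min(sum(clicks[a] for a in S), beta) / beta
    return F
```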
The left of Figure 3 shows average cover time for our proposed algorithm and the cumulative greedy algorithm of [13] on the same sequences of random objectives. For this example we use n = 25 and the bandit version of the problem with the Exp3 algorithm [12]. We also plot the average cover times for offline solutions as baselines. As seen in the figure, the cumulative algorithms converge to higher average cover times than the adaptive residual algorithms. Interestingly, the online cumulative algorithm does better than the offline cumulative algorithm: it seems the added randomization helps.
Figure 3: Average cover time
5.2    Repeated Active Learning for Movie Recommendation
Consider a movie recommendation website which asks users a sequence of questions before they are given recommendations. We define an online submodular set cover problem for choosing sequences of questions in order to quickly eliminate a large number of movies from consideration. This is similar conceptually to the diagnosis problem discussed in the introduction. Define the ground set V to be a set of questions (for example "Do you want to watch something released in the past 10 years?" or "Do you want to watch something from the Drama genre?"). Define F^t(S) to be proportional to the number of movies eliminated from consideration after asking the t-th user the questions in S. Specifically, let H be the set of all movies in our database and V^t(S) be the subset of movies consistent with the t-th user's responses to S. Define F^t(S) ≜ min(|H \ V^t(S)|/c, 1), where c is a constant. F^t(S) ≥ 1 iff after asking the set of questions S we have eliminated at least c movies.
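Concretely, with movies represented by their answer vectors, the per-user objective can be assembled as follows (the data layout is ours):

```python
def make_movie_objective(answers, true_movie, c):
    """Build F(S) = min(|H \\ V(S)| / c, 1): the capped fraction of the
    required c eliminations achieved by asking the questions in S.

    answers    : dict movie -> dict question -> answer
    true_movie : movie whose answers the simulated user reports
    """
    movies = set(answers)
    def F(S):
        consistent = sum(
            all(answers[m][q] == answers[true_movie][q] for q in S)
            for m in movies)
        eliminated = len(movies) - consistent
        return min(eliminated / c, 1.0)
    return F
```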
We set H to be a set of 11634 movies available on Netflix's Watch Instantly service and use 803 questions based on those we used for an offline problem [7]. To simulate user responses to questions, on round t we randomly select a movie from H and assume the t-th user answers questions consistently with this movie. We set c = |H| − 500, so the goal is to eliminate about 95% of all movies. We evaluate in the full-information setting: this makes sense if we assume we receive as feedback the movie the user actually selected. As our online prediction subroutine we tried Normal-Hedge [19], a second-order multiplicative weights method [20], and a version of multiplicative weights for small gains using the doubling trick (Section 2.6 of [10]). We also tried a heuristic modification of Normal-Hedge which fixes c_t = 1, a fixed, more aggressive learning rate than theoretically justified. The right of Figure 3 shows average cover time for 100 runs of T = 10000 iterations. Note the different scale in the bottom row: these methods performed significantly worse than Normal-Hedge.

The online cumulative greedy algorithm converges to an average cover time close to, but slightly worse than, that of the adaptive greedy method. The differences are more dramatic for prediction subroutines that converge slowly. The modified Normal-Hedge has no theoretical justification, so it may not generalize to other problems. For the modified Normal-Hedge the final average cover times are 7.72 (adaptive) and 8.22 (cumulative); the offline values are 6.78 and 7.15.
6    Open Problems
It is not yet clear what practical value our proposed approach will have for web search result ranking. A drawback of our approach is that we pick a fixed order in which to ask questions. For some problems it makes more sense to consider adaptive strategies [5, 6].
Acknowledgments
This material is based upon work supported in part by the National Science Foundation under grant
IIS-0535100, by an Intel research award, a Microsoft research award, and a Google research award.
References
[1] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In HLT, 2011.
[2] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social network. In KDD, 2003.
[3] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. JMLR, 2008.
[4] L. A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4), 1982.
[5] D. Golovin and A. Krause. Adaptive submodularity: A new approach to active learning and stochastic optimization. In COLT, 2010.
[6] Andrew Guillory and Jeff Bilmes. Interactive submodular set cover. In ICML, 2010.
[7] Andrew Guillory and Jeff Bilmes. Simultaneous learning and covering with adversarial noise. In ICML, 2011.
[8] Yossi Azar and Iftah Gamzu. Ranking with submodular valuations. In SODA, 2011.
[9] S. Im and V. Nagarajan. Minimum latency submodular cover in metrics. ArXiv e-prints, October 2011.
[10] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[11] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, pages 23-37, 1995.
[12] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2003.
[13] M. Streeter and D. Golovin. An online algorithm for maximizing submodular functions. In NIPS, 2008.
[14] F. Radlinski, R. Kleinberg, and T. Joachims. Learning diverse rankings with multi-armed bandits. In ICML, 2008.
[15] A. Slivkins, F. Radlinski, and S. Gollapudi. Learning optimally diverse rankings over large document collections. In ICML, 2010.
[16] N. Alon, B. Awerbuch, and Y. Azar. The online set cover problem. In STOC, 2003.
[17] Sham M. Kakade, Adam Tauman Kalai, and Katrina Ligett. Playing games with approximation algorithms. In STOC, 2007.
[18] J. Langford and T. Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In NIPS, 2007.
[19] K. Chaudhuri, Y. Freund, and D. Hsu. A parameter-free hedging algorithm. In NIPS, 2009.
[20] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 2007.
3,828 | 4,466 | How Do Humans Teach:
On Curriculum Learning and Teaching Dimension
Faisal Khan, Xiaojin Zhu, Bilge Mutlu
Department of Computer Sciences, University of Wisconsin-Madison
Madison, WI, 53706 USA. {faisal, jerryzhu, bilge}@cs.wisc.edu
Abstract
We study the empirical strategies that humans follow as they teach a target concept
with a simple 1D threshold to a robot.1 Previous studies of computational teaching, particularly the teaching dimension model and the curriculum learning principle, offer contradictory predictions on what optimal strategy the teacher should
follow in this teaching task. We show through behavioral studies that humans employ three distinct teaching strategies, one of which is consistent with the curriculum learning principle, and propose a novel theoretical framework as a potential
explanation for this strategy. This framework, which assumes a teaching goal of
minimizing the learner?s expected generalization error at each iteration, extends
the standard teaching dimension model and offers a theoretical justification for
curriculum learning.
1 Introduction
With machine learning comes the question of how to effectively teach. Computational teaching
has been well studied in the machine learning community [9, 12, 10, 1, 2, 11, 13, 18, 4, 14, 15].
However, whether these models can predict how humans teach is less clear. The latter question is
important not only for such areas as education and cognitive psychology but also for applications of
machine learning, as learning agents such as robots become commonplace and learn from humans.
A better understanding of the teaching strategies that humans follow might inspire the development
of new machine learning models and the design of learning agents that more naturally accommodate
these strategies.
Studies of computational teaching have followed two prominent threads. The first thread, developed by the computational learning theory community, is exemplified by the "teaching dimension" model [9] and its extensions [12, 10, 1, 2, 11, 13, 18]. The second thread, motivated partly by observations in psychology [16], is exemplified by the "curriculum learning" principle [4, 14, 15]. We will discuss these two threads in the next section. However, they make conflicting predictions on
will discuss these two threads in the next section. However, they make conflicting predictions on
what optimal strategy a teacher should follow in a simple teaching task. This conflict serves as an
opportunity to compare these predictions to human teaching strategies in the same task.
This paper makes two main contributions: (i) it enriches our empirical understanding of human
teaching and (ii) it offers a theoretical explanation for a particular teaching strategy humans follow.
Our approach combines cognitive psychology and machine learning. We first conduct a behavioral
study with human participants in which participants teach a robot, following teaching strategies
of their choice. This approach differs from most previous studies of computational teaching in
machine learning and psychology that involve a predetermined teaching strategy and that focus on
the behavior of the learner rather than the teacher. We then compare the observed human teaching
strategies to those predicted by the teaching dimension model and the curriculum learning principle.
¹ Our data is available at http://pages.cs.wisc.edu/~jerryzhu/pub/humanteaching.tgz.
Figure 1: The target concept hj .
Empirical results indicate that human teachers follow the curriculum learning principle, while no
evidence of the teaching dimension model is observed. Finally, we provide a novel theoretical
analysis that extends recent ideas in teaching dimension model [13, 3] and offers curriculum learning
a rigorous underpinning.
2 Competing Models of Teaching
We first review the classic teaching dimension model [9, 1]. Let X be an input space, Y the label
space, and (x1, y1), . . . , (xn, yn) ∈ X × Y a set of instances. We focus on binary classification in the unit interval: X = [0, 1], Y = {0, 1}. We call H ⊆ 2^{x1,...,xn} a concept class and h ∈ H a concept. A concept h is consistent with instance (x, y) iff x ∈ h ⇔ y = 1. h is consistent with a set
of instances if it is consistent with every instance in the set. A set of instances is called a teaching
set of a concept h with respect to H, if h is the only concept in H that is consistent with the set. The
teaching dimension of h with respect to H is the minimum size of its teaching set. The teaching
dimension of H is the maximum teaching dimension of its concepts.
Consider the task in Figure 1, which we will use throughout the paper. Let x1 ≤ . . . ≤ xn. Let H be all threshold labelings: H = {h | ∃θ ∈ [0, 1], ∀i = 1 . . . n : xi ∈ h ⇔ xi ≥ θ}. The target concept hj has the threshold between xj and xj+1: hj = {xj+1, . . . , xn}. Then, the teaching dimension of most hj is 2, as one needs the minimum teaching set {(xj, 0), (xj+1, 1)}; for the special cases h0 = {x1, . . . , xn} and hn = ∅ the teaching dimension is 1 with the teaching set {(x1, 1)} and {(xn, 0)}, respectively. The teaching dimension of H is 2. For our purpose, the most important argument is the following: the teaching strategy for most hj's suggested by teaching dimension is to show the two instances {(xj, 0), (xj+1, 1)} closest to the decision boundary. Intuitively, these are the instances most confusable by the learner.
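To make this argument concrete, the following small enumeration (our sketch, not from the paper; the pool values are made up) recovers the minimal teaching set for an interior threshold concept and confirms its teaching dimension of 2:

```python
import itertools

def threshold_concepts(n):
    # H: all threshold labelings of a sorted pool x_1 <= ... <= x_n
    return [tuple(int(i >= j) for i in range(n)) for j in range(n + 1)]

def minimal_teaching_set(target, H, n):
    # Smallest labeled subset for which `target` is the only consistent concept
    for size in range(1, n + 1):
        for idx in itertools.combinations(range(n), size):
            examples = [(i, target[i]) for i in idx]
            consistent = [h for h in H
                          if all(h[i] == y for i, y in examples)]
            if consistent == [target]:
                return examples
    return None

n = 5
H = threshold_concepts(n)
print(minimal_teaching_set(H[3], H, n))
# -> [(2, 0), (3, 1)]: the pair straddling the decision boundary,
#    so the teaching dimension of this interior concept is 2
```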
Alternatively, curriculum learning suggests an easy-to-hard (or clear-to-ambiguous) teaching strategy [4]. For the target concept in Figure 1, "easy" instances are those farthest from the decision boundary in each class, while "hard" ones are the closest to the boundary. One such teaching strategy is to present instances from alternating classes, e.g., in the following order: (x1, 0), (xn, 1), (x2, 0), (x_{n−1}, 1), . . . , (xj, 0), (xj+1, 1). Such a strategy has been used for second-language teaching in humans. For example, to train Japanese listeners on the English [r]-[l] distinction, McCandliss et al. linearly interpolated a vocal tract model to create a 1D continuum similar
to Figure 1 along [r] and [l] sounds. They showed that participants were better able to distinguish
the two phonemes if they were given easy (over-articulated) training instances first [16]. Computationally, curriculum learning has been justified as a heuristic related to the continuation method in
optimization to avoid poor local optima [4].
Hence, for the task in Figure 1, we have two sharply contrasting teaching strategies at hand: the
boundary strategy starts near the decision boundary, while the extreme strategy starts with extreme instances and gradually approaches the decision boundary from both sides. Our goal in this
paper is to compare human teaching strategies with these two predictions to shed more light on
models of teaching. While the teaching task used in our exploration is simple, as most real-world
teaching situations do not involve a threshold in a 1D space, we believe that it is important to lay the
foundation in a tractable task before studying more complex tasks.
3 A Human Teaching Behavioral Study
Under IRB approval, we conducted a behavioral study with human participants to explore human
teaching behaviors in a task similar to that illustrated in Figure 1. In our study, participants teach
the target concept of "graspability" (whether an object can be grasped and picked up with one hand) to a robot. We chose graspability because it corresponds nicely to a 1D space empirically
[Figure 2: (a) A participant performing the card sorting/labeling and teaching tasks. (b) Human teaching sequences that follow the extreme strategy gradually shrink the version space V1; the panel plots |V1| against iteration t = 1, . . . , 15.]
studied before [17]. We chose to use a robot learner because it offers great control and consistency
while facilitating natural interaction and teaching. The robot keeps its behavior consistent across
conditions and trials, therefore, providing us with the ability to isolate various interactional factors.
This level of experimental control is hard to achieve with a human learner. The robot also affords
embodied behavioral cues that facilitate natural interaction and teaching strategies that computers
do not afford.
Participants were 31 paid subjects recruited from the University of Wisconsin-Madison campus.
All were native English speakers with an average age of 21 years.
Materials. We used black-and-white photos of n = 31 objects chosen from the norming study
of Salmon et al. [17]. The photos were of common objects (e.g., food, furniture, animals) whose
average subjective graspability ratings evenly span the whole range. We printed each photo on a 2.5-by-4.5 inch card. The robot was a Wakamaru humanlike robot manufactured by Mitsubishi Heavy
Industries, Ltd. It neither learned nor responded to teaching. Instead, it was programmed to follow
motion in the room with its gaze. Though seemingly senseless, this behavior in fact provides a
consistent experience to the participants without extraneous factors to bias them. It also corresponds
to the no-feedback assumption in most teaching models [3]. Participants were not informed that the
robot was not actually learning.
Procedure. Each participant completed the experiment alone. The experiment involved two subtasks that were further broken down into multiple steps. In the first subtask, participants sorted the
objects based on their subjective ratings of their graspability following the steps below.
In step 1, participants were instructed to place each object along a ruler provided on a long table
as seen in Figure 2(a). To provide baselines on the two ends of the graspability spectrum, we fixed
a highly graspable object (a toothbrush) and a highly non-graspable object (a building) on the two
ends of the ruler. We captured the image of the table and later converted the position of each card
into a participant-specific, continuous graspability rating x1, . . . , xn ∈ [0, 1]. For our purpose, there
is no need to enforce inter-participant agreement.
In step 2, participants assigned a binary "graspable" (y = 1) or "not graspable" (y = 0) label to each
object by writing the label on the back of the corresponding card. This gave us labels y1 , . . . , yn .
The sorted cards and the decision boundary from one of the participants is illustrated in Figure 3.
In step 3, we asked participants to leave the room for a short duration so that "the robot could examine the sorted cards on the table without looking at the labels provided at the back," creating
the impression that the learner will associate the cards with the corresponding values x1 , . . . , xn .
In the second subtask, participants taught the robot the (binary) concept of graspability using the
cards. In this task, participants picked up a card from the table, turned toward the robot, and held
the card up while providing a verbal description of the object?s graspability (i.e., the binary label
y) as seen in Figure 2(a). The two cards, ?toothbrush? and ?building,? were fixed to the table and
not available for teaching. The participants were randomly assigned into two conditions: (1) natural
and (2) constrained. In the "natural" condition, participants were allowed to use natural language to describe the graspability of the objects, while those in the "constrained" condition were only allowed to say either "graspable" or "not graspable." They were instructed to use as few cards as they felt
necessary. There was no time limit on either subtask.
Results. The teaching sequences from all participants are presented in Figure 4. The title of each
plot contains the participant ID and condition. The participant?s rating and classification of all
objects are presented above the x-axis. Objects labeled as "not graspable" are indicated with blue circles and those labeled as "graspable" are marked with red plus signs. The x-axis position of the object represents its rating x ∈ [0, 1]. The vertical blue and red lines denote an "ambiguous region" around the decision boundary; objects to the left of the blue line have the label "not graspable;" those to the right of the red line are labeled as "graspable," and objects between these lines could
have labels in mixed order. In theory, following the boundary strategy, the teacher should start with
teaching instances on these two lines as suggested by the teaching dimension model. The y-axis is
trial t = 1, . . . , 15, which progresses upwards. The black line and dots represent the participant's
teaching sequence. For example, participant P01 started teaching at t = 1 with an object she rated
as x = 1 and labeled as "graspable;" at t = 2, she chose an example with rating x = 0 and label "not graspable;" and so on. The average teaching sequence had approximately 8 examples, while
the longest teaching sequence had a length of 15 examples.
We observed three major human teaching strategies in our data: (1) the extreme strategy, which
starts with objects with extreme ratings and gradually moves toward the decision boundary; (2)
the linear strategy, which follows a prominent left-to-right or right-to-left sequence; and (3) the
positive-only strategy, which involves only positively labeled examples. We categorized most
teaching sequences into these three strategies following a simple heuristic. First, sequences that
involved only positive examples were assigned to the positive-only strategy. Then, we assigned
the sequences whose first two teaching examples had different labels to the extreme strategy and
the others to the linear strategy. While this simplistic approach does not guarantee perfect classification (e.g., P30 can be labeled differently), it minimizes hand-tuning and reduces the risk of
overfitting. We made two exceptions, manually assigning P14 and P16 to the extreme strategy.
Nonetheless, these few potential misclassifications do not change our conclusions below.
None of the sequences followed the boundary strategy. In fact, among all 31 participants, 20 started
teaching with the most graspable object (according to their own rating), 6 with the least graspable,
none in or around the ambiguous region (as boundary strategy would predict), and 5 with some
other objects. In brief, people showed a tendency to start teaching with extreme objects, especially
the most graspable ones. During post-interview, when asked why they did not start with objects
around their decision boundary, most participants mentioned that they wanted to start with clear
examples of graspability.
For participants who followed the extreme strategy, we are interested in whether their teaching
sequences approach the decision boundary as curriculum learning predicts. Specifically, at any
time t, let the partial teaching sequence be (x1 , y1 ), . . . , (xt , yt ). The aforementioned ambiguous
region with respect to this partial sequence is the interval between the inner-most pair of teaching
examples with different labels. This can be written as V1 ≡ [max_{j:yj=0} xj, min_{j:yj=1} xj], where j is
over 1 . . . t. V1 is exactly the version space of consistent threshold hypotheses (the subscript 1 will
become clear in the next section). Figure 2(b) shows a box plot of the size of V1 for all participants
as a function of t. The red lines mark the median and the blue boxes indicate the 1st & 3rd quartiles.
As expected, the size of the version space decreases.
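The shrinking ambiguous region plotted in Figure 2(b) can be recomputed from any teaching sequence in a few lines. A minimal sketch (ours; the example sequence is invented):

```python
def ambiguous_region_sizes(sequence):
    """|V1| after each teaching example.

    sequence: (x, y) pairs with rating x in [0, 1] and label y in {0, 1}.
    The region boundaries start at the ends of the rating scale.
    """
    sizes, max_neg, min_pos = [], 0.0, 1.0
    for x, y in sequence:
        if y == 0:
            max_neg = max(max_neg, x)
        else:
            min_pos = min(min_pos, x)
        sizes.append(min_pos - max_neg)
    return sizes

# An extreme-strategy sequence: alternate classes, moving inward.
seq = [(0.95, 1), (0.05, 0), (0.8, 1), (0.2, 0), (0.6, 1), (0.45, 0)]
print(ambiguous_region_sizes(seq))
# -> [0.95, 0.90, 0.75, 0.60, 0.40, 0.15] (up to floating point):
#    |V1| shrinks monotonically, as in Figure 2(b)
```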
Figure 3: Sorted cards and the decision boundary from one of the participants.
[Figure 4: Teaching sequences of all participants, one panel per participant (teaching iteration t, progressing upwards, vs. object rating x ∈ [0, 1]). Panels are grouped by strategy. The extreme strategy: P03, P01, P13, P15, P25, P31 (natural) and P06, P12, P10, P14, P16, P18, P20, P22 (constrained). The linear strategy: P05, P07, P09, P11, P17, P23, P19 (natural) and P04, P02, P08, P24, P26, P28, P30 (constrained). The positive-only strategy: P21, P27, P29 (natural).]
Finally, the positive-only strategy was observed significantly more in the "natural" condition (3/16 ≈ 19%) than in the "constrained" condition (0/15 = 0%), χ²(1, N = 31) = 4.27, p = .04.
We observed that these participants elaborated in English to the robot why they thought that their
objects were graspable. We speculate that they might have felt that they had successfully described
the rules and that there was no need to use negative examples. In contrast, the constrained condition
did not have the rich expressivity of natural language, necessitating the use of negative examples.
4 A Theoretical Account of the "Extreme" Teaching Strategy
We build on our empirical results and offer a theoretical analysis as a possible rationalization for the
extreme strategy. Research in cognitive psychology has consistently shown that humans represent
everyday objects with a large number of features (e.g., [7, 8]). We posit that although our teaching
task was designed to mimic the one-dimensional task illustrated in Figure 1 (e.g., the linear layout
of the cards in Figure 3), our teachers might still have believed (perhaps subconsciously) that the
robot learner, like humans, associates each teaching object with multiple feature dimensions.
Under the high-dimensional assumption, we show that the extreme strategy is an outcome of minimizing per-iteration expected error of the learner. Note that the classic teaching dimension model [9]
fails to predict the extreme strategy even under this assumption. Our analysis is inspired by recent
advances in teaching dimension, which assume that teaching progresses in iterations and learning
is to be maximized after each iteration [13, 3]. Different from those analyses, we minimize the
expected error instead of the worst-case error and employ different techniques.
4.1 Problem Setting and Model Assumptions
Our formal setup is as follows. The instance space is the d-dimensional hypercube X = [0, 1]^d. We use boldface x ∈ X to denote an instance and x_{ij} for the j-th dimension of instance x_i. The binary label y is determined by the threshold 1/2 in the first dimension: y_i = 1_{x_{i1} ≥ 1/2}. This formulation idealizes our empirical study, where the continuous rating is the first dimension. It implies that the target concept is unrelated to any of the other d − 1 features. In practice, however, there may be other
features that are correlated with the target concept. But our analysis carries through by replacing d
with the number of irrelevant dimensions.
Departing from classic teaching models, we consider a "pool-based sequential" teaching setting. In this setting, a pool of n instances is sampled i.i.d. x1, . . . , xn ∼ p(x), where we assume that p(x) is uniform on X for simplicity. Their labels y1, . . . , yn may be viewed as being sampled from the conditional distribution p(yi = 1 | xi) = 1_{x_{i1} > 1/2}. The teacher can only sequentially teach
instances selected from the pool (e.g., in our empirical study, the pool consists of the 29 objects).
Her goal is for the learner to generalize well on test instances outside the pool (also sampled from
p(x, y) = p(x)p(y | x)) after each iteration.
At this point, we make two strong assumptions on the learner. First, we assume that the learner
entertains axis-parallel hypotheses. That is, each hypothesis has the form h^k_{θ,s}(x) = 1_{s(x_k − θ) ≥ 0} for some dimension k ∈ {1, . . . , d}, threshold θ ∈ [0, 1], and orientation s ∈ {−1, 1}. The cognitive interpretation of an axis-parallel hypothesis is that the learner attends to a single dimension at any given time.² As in classic teaching models, our learner is consistent (i.e., it never contradicts the teaching instances it receives). The version space V(t) of the learner, i.e., the set of hypotheses consistent with the teaching sequence (x1, y1), . . . , (xt, yt) so far, takes the form V(t) = ∪_{k=1}^d Vk(t), where Vk(t) = {h^k_{θ,1} | max_{j:yj=0} x_{jk} ≤ θ ≤ min_{j:yj=1} x_{jk}} ∪ {h^k_{θ,−1} | max_{j:yj=1} x_{jk} ≤ θ ≤ min_{j:yj=0} x_{jk}}. The version space can be thought of as the union of inner
intervals surviving the teaching examples.
Second, similar to the randomized learners in [2], our learner selects a hypothesis h uniformly from
the version space V(t), follows it until h is no longer in V(t), and then randomly selects a replacement hypothesis, a strategy known as "win stay, lose shift" in cognitive psychology [5]. It is thus a Gibbs classifier. In particular, the risk, defined as the expected 0-1 loss of the learner on a test instance, is R(t) ≡ E_{(x,y)∼p(x,y)} E_{h∼V(t)} 1_{h(x)≠y}. We point out that our assumptions are
psychologically plausible and will greatly simplify the derivation below.
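A small simulation of this learner model (our sketch; function names, sample sizes, and the example teaching pair are illustrative) builds the surviving intervals Vk(t) and estimates the Gibbs risk by sampling hypotheses uniformly from the version space:

```python
import numpy as np

def version_space(X, y):
    """Surviving intervals of axis-parallel hypotheses h(x) = 1{s(x_k - theta) >= 0}.

    X: (t, d) teaching instances in [0,1]^d; y: (t,) labels in {0,1}.
    Returns (k, s, lo, hi) tuples: thresholds theta in [lo, hi] are consistent.
    """
    intervals = []
    for k in range(X.shape[1]):
        pos, neg = X[y == 1, k], X[y == 0, k]
        for s in (+1, -1):
            if s == +1:   # predict 1 iff x_k >= theta
                lo = neg.max() if neg.size else 0.0
                hi = pos.min() if pos.size else 1.0
            else:         # predict 1 iff x_k <= theta
                lo = pos.max() if pos.size else 0.0
                hi = neg.min() if neg.size else 1.0
            if lo < hi:
                intervals.append((k, s, lo, hi))
    return intervals

def gibbs_risk(intervals, d, n_hyp=2000, n_test=5000, seed=0):
    # True concept: y = 1{x_1 >= 1/2}; hypotheses drawn uniformly over V(t)
    rng = np.random.default_rng(seed)
    lengths = np.array([hi - lo for _, _, lo, hi in intervals])
    p = lengths / lengths.sum()
    Xte = rng.uniform(size=(n_test, d))
    yte = Xte[:, 0] >= 0.5
    err = 0.0
    for _ in range(n_hyp):
        k, s, lo, hi = intervals[rng.choice(len(intervals), p=p)]
        theta = rng.uniform(lo, hi)
        err += ((s * (Xte[:, k] - theta) >= 0) != yte).mean()
    return err / n_hyp

rng = np.random.default_rng(1)
X = rng.uniform(size=(2, 10)); y = np.array([1, 0])
X[0, 0], X[1, 0] = 0.9, 0.1            # an extreme pair in the relevant dimension
print(gibbs_risk(version_space(X, y), d=10))  # compare with Section 4.2's closed form
```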
4.2 Starting with Extreme Teaching Instances is Asymptotically Optimal
We now show why starting with extreme teaching instances as in curriculum learning, as opposed
to the boundary strategy, is optimal under our setting. Specifically, we consider the problem of selecting an optimal teaching sequence of length t = 2, one positive and one negative, (x1 , 1), (x2 , 0).
Introducing the shorthand a ≡ x11, b ≡ x21, the teacher seeks a, b to minimize the risk:

    min_{a, b ∈ [0,1]} R(2)        (1)
Note that we allow a, b to take any value within their domains, which is equivalent to having an
infinite pool for the teacher to choose from. We will tighten it later. Also note that we assume the
teacher does not pay attention to irrelevant dimensions, whose feature values can then be modeled
by uniform random variables.
For any teaching sequence of length 2, the individual intervals of the version space are of size |V1(2)| = a − b and |Vk(2)| = |x_{1k} − x_{2k}| for k = 2, . . . , d, respectively. The total size of the version space is |V(2)| = a − b + Σ_{k=2}^d |x_{1k} − x_{2k}|. Figure 5(a) shows that for all h^1_{θ1,1} ∈ V1(2), the decision boundary is parallel to the true decision boundary and the test error is E_{(x,y)∼p(x,y)} 1_{h^1_{θ1,1}(x)≠y} = |θ1 − 1/2|. Figure 5(b) shows that for all h^k_{θk,s} ∈ ∪_{k=2}^d Vk(2), the decision boundary is orthogonal to the true decision boundary and the test error is 1/2. Therefore, we have

    R(2) = (1/|V(2)|) [ ∫_b^a |θ1 − 1/2| dθ1 + Σ_{k=2}^d ∫_{min(x_{1k}, x_{2k})}^{max(x_{1k}, x_{2k})} (1/2) dθk ]
         = (1/|V(2)|) [ (1/2)(1/2 − b)² + (1/2)(a − 1/2)² + Σ_{k=2}^d (1/2)|x_{1k} − x_{2k}| ].

Introducing the shorthand ck ≡ |x_{1k} − x_{2k}| and c ≡ Σ_{k=2}^d ck, one can write

    R(2) = ((1/2 − b)² + (a − 1/2)² + c) / (2(a − b + c)).

The intuition is that a pair of teaching instances leads to a version space V(2) consisting of one interval per dimension. A random
hypothesis selected from the interval in the first dimension V1(2) can range from good (if θ1 is close to 1/2) to poor (θ1 far away from 1/2), while one selected from ∪_{k=2}^d Vk(2) is always bad. The teacher can optimize the risk by choosing the size of V1(2) relative to the total version space size.

² A generalization to arbitrary non-axis-parallel linear separators is possible in theory and would be interesting. However, non-axis-parallel linear separators (known as "information integration" in psychology) are more challenging for human learners. Consequently, our human teachers might not have expected the robot learner to perform information integration either.
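As a sanity check, the closed form for R(2) can be compared against a direct Monte Carlo estimate over hypotheses (our sketch; the particular a, b, and gap values are arbitrary, and we assume b ≤ 1/2 ≤ a as in the derivation):

```python
import numpy as np

def R2_closed(a, b, c):
    return ((0.5 - b) ** 2 + (a - 0.5) ** 2 + c) / (2 * (a - b + c))

def R2_monte_carlo(a, b, gaps, n=400_000, seed=0):
    # gaps: the c_k = |x_1k - x_2k| values in the d-1 irrelevant dimensions
    rng = np.random.default_rng(seed)
    lengths = np.concatenate([[a - b], gaps])
    dim = rng.choice(len(lengths), size=n, p=lengths / lengths.sum())
    theta1 = b + rng.uniform(size=n) * (a - b)   # threshold of a parallel hypothesis
    err = np.where(dim == 0, np.abs(theta1 - 0.5), 0.5)
    return err.mean()

a, b = 0.9, 0.1                                   # assumes b <= 1/2 <= a
gaps = np.random.default_rng(1).uniform(size=9)   # d = 10
print(R2_closed(a, b, gaps.sum()), R2_monte_carlo(a, b, gaps))
# the two estimates agree (to Monte Carlo error)
```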
[Figure 5: (a) A hypothesis h^1_{θ1,1} ∈ V1(2) is parallel to the true decision boundary, with test error |θ1 − 1/2| (shaded area). (b) A hypothesis h^2_{θ2,s} ∈ V2(2) is orthogonal to the true decision boundary, with test error 1/2 (shaded area). (c) Theoretical teaching sequences gradually shrink |V1|, similar to human behaviors; curves are shown for d = 2, 12, 100, and 1000 against iteration t.]
The optimal choice is specified by the following theorem.
Theorem 1. The minimum risk R(2) is achieved at a = (√(c² + 2c) − c + 1)/2, b = 1 − a.
Proof. First, we show that at the minimum, a and b are symmetric around 1/2, i.e., b = 1 − a. Suppose not. Then (a + b)/2 = 1/2 + ε for some ε ≠ 0. Let a′ = a − ε, b′ = b − ε. Then

    ((1/2 − b′)² + (a′ − 1/2)² + c) / (2(a′ − b′ + c)) = ((1/2 − b)² + (a − 1/2)² + c − 2ε²) / (2(a − b + c)) < ((1/2 − b)² + (a − 1/2)² + c) / (2(a − b + c)),

so (a, b) was not the minimum, a contradiction. Next, substituting b = 1 − a in R(2) and setting the derivative w.r.t. a to 0 proves the theorem.
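Theorem 1 is also easy to verify numerically: a brute-force grid search over (a, b) recovers the closed-form minimizer (our sketch; grid resolution and c values are arbitrary):

```python
import numpy as np

def R2(a, b, c):
    return ((0.5 - b) ** 2 + (a - 0.5) ** 2 + c) / (2 * (a - b + c))

grid = np.linspace(0.01, 0.99, 981)               # step 0.001
A, B = np.meshgrid(grid, grid)
for c in [0.1, 1.0, 3.0, 33.0]:
    with np.errstate(divide="ignore", invalid="ignore"):
        R = R2(A, B, c)
    R[A <= B] = np.inf                            # require a > b
    i, j = np.unravel_index(np.argmin(R), R.shape)
    a_star = (np.sqrt(c ** 2 + 2 * c) - c + 1) / 2    # Theorem 1
    print(c, round(a_star, 3), round(float(A[i, j]), 3), round(float(1 - B[i, j]), 3))
    # grid minimizer matches (a*, b* = 1 - a*) to grid resolution
```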
Recall that c is the size of the part of the version space in irrelevant dimensions. When d → ∞, c → ∞ and the solution is a = 1, b = 0. Here, the learner can form so many bad hypotheses in the
many wrong dimensions that the best strategy for the teacher is to make V1 (2) as large as possible,
even though many hypotheses in V1 (2) have nonzero error.
Corollary 2. The minimizer to (1) is a = 1, b = 0 when the dimensionality d → ∞.
Proof. We characterize the distribution of ck by considering the distance between two random variables x_{1k}, x_{2k} sampled uniformly in [0, 1]. Let z_{(1)}, z_{(2)} be the values of x_{1k}, x_{2k} sorted in ascending order. Then ck = z_{(2)} − z_{(1)} is an instance of order statistics [6]. One can show that, in general, with t independent unif[0, 1] random variables sorted in ascending order as z_{(1)}, . . . , z_{(j)}, z_{(j+1)}, . . . , z_{(t)}, the distance z_{(j+1)} − z_{(j)} follows a Beta(1, t) distribution. In our case with t = 2, ck ∼ Beta(1, 2), whose mean is 1/3 as expected. It follows that c is the sum of d − 1 independent Beta random variables. As d → ∞, c → ∞. Let η = 1/c. Applying l'Hôpital's rule,

    lim_{c→∞} a = lim_{c→∞} (√(c² + 2c) − c + 1)/2 = lim_{η→0} (√(1 + 2η) − 1 + η)/(2η) = 1.
Corollary 2 has an interesting cognitive interpretation; the teacher only needs to pay attention to the relevant (first) dimension x11, x21 when selecting the two teaching instances. She does not need to consider the irrelevant dimensions, as those will add up to a large c, which simplifies the teacher's task in choosing a teaching sequence; she simply picks two extreme instances in the first dimension. We also note that in practice d does not need to be very large for a to be close to 1. For example, with d = 10 dimensions, the average c is (1/3)(d − 1) = 3 and the corresponding a = 0.94, with
d = 100, a = 0.99. This observation provides further psychological plausibility to our model.
So far, we have assumed an infinite pool, such that the teacher can select the extreme teaching
instances with x11 = 1, x21 = 0. In practice, the pool is finite and the optimal a, b values specified in Theorem 1 may not be attainable within the pool. However, it is straightforward to show that lim_{c→∞} R′(2) < 0, where the derivative is w.r.t. a after substituting b = 1 − a. That is, in the case of c → ∞, the objective in (1) is a monotonically decreasing function of a. Therefore, the optimal strategy for a finite pool is to choose the negative instance with the smallest x·1 value and
the positive instance with the largest x·1 value. Note the similarity to curriculum learning, which
starts with extreme (easy) instances.
4.3 The Teaching Sequence should Gradually Approach the Boundary
Thus far, we have focused on choosing the first two teaching instances. We now show that, as
teaching continues, the teacher should choose instances with a and b gradually approaching 1/2.
This is a direct consequence of minimizing the risk R(t) at each iteration, as c decreases to 0. In this
section, we study the speed by which c decreases to 0 and a to 1/2.
Consider the moment when the teacher has already presented a teaching sequence
(x1, y1), . . . , (x_{t−2}, y_{t−2}) and is about to select the next pair of teaching instances (x_{t−1}, 1), (x_t, 0).
Teaching with pairs is not crucial but will simplify the analysis. Following the discussion after Corollary 2, we assume that the teacher only pays attention to the first dimension when selecting teaching
instances. This assumption allows us to again model the other dimensions as random variables. The
teacher wishes to determine the optimal a = x_{t−1,1}, b = x_{t,1} values according to Theorem 1. What
is the value of c for a teaching sequence of length t?
Theorem 3. Let the teaching sequence contain t0 negative labels and t − t0 positive ones. Then the random variables ck = ξk βk, where ξk ∼ Bernoulli(p, 1 − p) with p = 2/C(t, t0) (taking values 1 and 0, respectively) and βk ∼ Beta(1, t), independently for k = 2, . . . , d, with C(t, t0) the binomial coefficient. Consequently, E(c) = 2(d − 1) / (C(t, t0)(1 + t)).
Proof. We show that for each irrelevant dimension k = 2, . . . , d, after t teaching instances, |Vk(t)| = ξk βk. As mentioned above, these t teaching instances can be viewed as unif[0, 1] random variables in the kth dimension. Sort the values x_{1k}, . . . , x_{tk} in ascending order. Denote the sorted values as z_{(1)}, . . . , z_{(t)}. Vk(t) is non-empty only if the labels happen to be linearly separable, i.e., either z_{(1)}, . . . , z_{(t0)} have negative labels while the rest have positive labels, or the other way around. Consider the corresponding analogy where one randomly selects a permutation of t items (there are t! permutations), such that the selected permutation has its first t0 items with negative labels and the rest with positive labels (there are t0!(t − t0)! such permutations). This probability corresponds to ξk. When Vk(t) is nonempty, its size |Vk(t)| is characterized by the order statistic z_{(t0+1)} − z_{(t0)}, which corresponds to the Beta random variable βk as mentioned earlier in the proof of Corollary 2.
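Theorem 3 can likewise be checked by simulation (our sketch; the trial count is arbitrary): draw t uniform values per irrelevant dimension, keep the inner gap only when the induced labels happen to be linearly separable, and compare the average of c with the closed form:

```python
import numpy as np
from math import comb

def simulate_Ec(t, t0, d, trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(trials, d - 1, t))        # irrelevant dimensions only
    neg, pos = X[..., :t0], X[..., t0:]             # first t0 instances are negative
    gap_right = pos.min(axis=2) - neg.max(axis=2)   # negatives below positives
    gap_left = neg.min(axis=2) - pos.max(axis=2)    # positives below negatives
    # at most one of the two gaps can be positive; it is |V_k(t)|
    gap = np.maximum(gap_right, 0) + np.maximum(gap_left, 0)
    return gap.sum(axis=1).mean()

t, t0, d = 6, 3, 10
print(simulate_Ec(t, t0, d), 2 * (d - 1) / (comb(t, t0) * (1 + t)))
# both are about 0.129 for these settings
```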
As the binomial coefficient in the denominator of E(c) suggests, c decreases to 0 rapidly with t,
because t randomly-placed labels in 1D are increasingly unlikely to be linearly separable. Following
Theorem 1, the corresponding optimal a, b approach 1/2. Due to the form of Theorem 1, the pace is
slower. To illustrate how fast the optimal teaching sequence approaches 1/2 in the first dimension,
Figure 5(c) shows a plot of |V1| = a − b as a function of t by using E(c) in Theorem 1 (note in general that this is not E(|V1|), but only a typical value). We set t0 = t/2. This plot is similar to the
one we produced from human behavioral data in Figure 2(b). For comparison, that plot is copied
here in the background. Because the effective number of independent dimensions d is unknown, we
present several curves for different d's. Some of these curves provide a qualitatively reasonable fit
to human behavior, despite the fact that we made several simplifying model assumptions.
5 Conclusion and Future Work
We conducted a human teaching experiment and observed three distinct human teaching strategies.
Empirical results yielded no evidence for the boundary strategy but showed that the extreme
strategy is consistent with the curriculum learning principle. We presented a theoretical framework
that extends teaching dimension and explains two defining properties of the extreme strategy: (1)
teaching starts with extreme instances and (2) teaching gradually approaches the decision boundary.
Our framework predicts that, in the absence of irrelevant dimensions (d = 1), teaching should start
at the decision boundary. To verify this prediction, in our future work, we plan to conduct additional
human teaching studies where the objects have no irrelevant attributes. We also plan to further
investigate and explain the linear strategy and the positive-only strategy that we observed in
our current study.
Acknowledgments: We thank Li Zhang and Eftychios Sifakis for helpful comments. Research supported by
NSF IIS-0953219, IIS-0916038, AFOSR FA9550-09-1-0313, Wisconsin Alumni Research Foundation, and
Mitsubishi Heavy Industries, Ltd.
References
[1] D. Angluin. Queries revisited. Theoretical Computer Science, 313(2):175–194, 2004.
[2] F. J. Balbach and T. Zeugmann. Teaching randomized learners. In Proceedings of the 19th Annual Conference on Computational Learning Theory (COLT), pages 229–243. Springer, 2006.
[3] F. J. Balbach and T. Zeugmann. Recent developments in algorithmic teaching. In Proceedings of the 3rd International Conference on Language and Automata Theory and Applications, pages 1–18, 2009.
[4] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In L. Bottou and M. Littman, editors, Proceedings of the 26th International Conference on Machine Learning, pages 41–48, Montreal, June 2009. Omnipress.
[5] J. S. Bruner, J. J. Goodnow, and G. A. Austin. A Study of Thinking. New York: Wiley, 1956.
[6] H. A. David and H. N. Nagaraja. Order Statistics. Wiley, 3rd edition, 2003.
[7] S. De Deyne and G. Storms. Word associations: Network and semantic properties. Behavior Research Methods, 40:213–231, 2008.
[8] S. De Deyne and G. Storms. Word associations: Norms for 1,424 Dutch words in a continuous task. Behavior Research Methods, 40:198–205, 2008.
[9] S. Goldman and M. Kearns. On the complexity of teaching. Journal of Computer and Systems Sciences, 50(1):20–31, 1995.
[10] S. Goldman and H. Mathias. Teaching a smarter learner. Journal of Computer and Systems Sciences, 52(2):255–267, 1996.
[11] S. Hanneke. Teaching dimension and the complexity of active learning. In Proceedings of the 20th Annual Conference on Computational Learning Theory (COLT), pages 66–81, 2007.
[12] T. Hegedűs. Generalized teaching dimensions and the query complexity of learning. In Proceedings of the Eighth Annual Conference on Computational Learning Theory (COLT), pages 108–117, 1995.
[13] H. Kobayashi and A. Shinohara. Complexity of teaching by a restricted number of examples. In Proceedings of the 22nd Annual Conference on Computational Learning Theory (COLT), pages 293–302, 2009.
[14] M. P. Kumar, B. Packer, and D. Koller. Self-paced learning for latent variable models. In NIPS, 2010.
[15] Y. J. Lee and K. Grauman. Learning the easy things first: Self-paced visual category discovery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[16] B. D. McCandliss, J. A. Fiez, A. Protopapas, M. Conway, and J. L. McClelland. Success and failure in teaching the [r]-[l] contrast to Japanese adults: Tests of a Hebbian model of plasticity and stabilization in spoken language perception. Cognitive, Affective, & Behavioral Neuroscience, 2(2):89–108, 2002.
[17] J. P. Salmon, P. A. McMullen, and J. H. Filliter. Norms for two types of manipulability (graspability and functional usage), familiarity, and age of acquisition for 320 photographs of objects. Behavior Research Methods, 42(1):82–95, 2010.
[18] S. Zilles, S. Lange, R. Holte, and M. Zinkevich. Models of cooperative teaching and learning. Journal of Machine Learning Research, 12:349–384, 2011.
3,829 | 4,467 | ICA with Reconstruction Cost for Efficient
Overcomplete Feature Learning
Quoc V. Le, Alexandre Karpenko, Jiquan Ngiam and Andrew Y. Ng
{quocle,akarpenko,jngiam,ang}@cs.stanford.edu
Computer Science Department, Stanford University
Abstract
Independent Components Analysis (ICA) and its variants have been successfully
used for unsupervised feature learning. However, standard ICA requires an orthonormality constraint to be enforced, which makes it difficult to learn overcomplete features. In addition, ICA is sensitive to whitening. These properties make
it challenging to scale ICA to high dimensional data. In this paper, we propose a
robust soft reconstruction cost for ICA that allows us to learn highly overcomplete
sparse features even on unwhitened data. Our formulation reveals formal connections between ICA and sparse autoencoders, which have previously been observed
only empirically. Our algorithm can be used in conjunction with off-the-shelf fast
unconstrained optimizers. We show that the soft reconstruction cost can also be
used to prevent replicated features in tiled convolutional neural networks. Using
our method to learn highly overcomplete sparse features and tiled convolutional
neural networks, we obtain competitive performances on a wide variety of object
recognition tasks. We achieve state-of-the-art test accuracies on the STL-10 and
Hollywood2 datasets.
1 Introduction
Sparsity has been shown to work well for learning feature representations that are robust for object
recognition [1, 2, 3, 4, 5, 6, 7]. A number of algorithms have been proposed to learn sparse features. These include: sparse auto-encoders [8], Restricted Boltzmann Machines (RBMs) [9], sparse
coding [10] and Independent Component Analysis (ICA) [11]. ICA, in particular, has been shown
to perform well in a wide range of object recognition tasks [12]. In addition, ISA (Independent
Subspace Analysis, a variant of ICA) has been used to learn features that achieved state-of-the-art
performance on action recognition tasks [13].
However, standard ICA has two major drawbacks. First, it is difficult to learn overcomplete feature
representations (i.e., the number of features cannot exceed the dimensionality of the input data). This
puts ICA at a disadvantage compared to other methods, because Coates et al. [6] have shown that
classification performance improves for algorithms such as sparse autoencoders [8], K-means [6]
and RBMs [9], when the learned features are overcomplete. Second, ICA is sensitive to whitening
(a preprocessing step that decorrelates the input data, and cannot always be computed exactly for
high dimensional data). As a result, it is difficult to scale ICA to high dimensional data. In this paper
we propose a modification to ICA that not only addresses these shortcomings but also reveals strong
connections between ICA, sparse autoencoders and sparse coding.
Both drawbacks arise from a constraint in the standard ICA formulation that requires features to be
orthogonal. This hard orthonormality constraint, W W T = I, is used to prevent degenerate solutions
in the feature matrix W (where each feature is a row of W ). However, if W is overcomplete (i.e., a
"tall" matrix) then this constraint can no longer be satisfied. In particular, the standard optimization procedure for ICA, ISA and TICA (Topographic ICA) uses projected gradient descent, where W is orthonormalized at each iteration by solving W := (W W^T)^{−1/2} W. This symmetric orthonormalization procedure does not work when W is overcomplete. As a result, this standard ICA method
can not learn more features than the number of dimensions in the data. Furthermore, while alternative orthonormalization procedures or score matching can learn overcomplete representations, they
are expensive to compute. Constrained optimizers also tend to be much slower than unconstrained
ones.¹
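To see the failure mode concretely, here is the symmetric orthonormalization step in a few lines (our sketch, assuming the usual eigendecomposition route; it is not code from the paper):

```python
import numpy as np

def sym_orthonormalize(W):
    # W := (W W^T)^{-1/2} W, via the eigendecomposition of W W^T
    vals, vecs = np.linalg.eigh(W @ W.T)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T @ W

rng = np.random.default_rng(0)
W = sym_orthonormalize(rng.normal(size=(4, 10)))      # undercomplete: fine
print(np.allclose(W @ W.T, np.eye(4)))                # True

W_over = rng.normal(size=(20, 10))                    # overcomplete: k > n
print(np.linalg.matrix_rank(W_over @ W_over.T))       # 10 < 20: W W^T is singular,
# so its inverse square root does not exist and the projection step breaks down
```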
Our algorithm enables ICA to scale to overcomplete representations by replacing the orthonormalization constraint with a linear reconstruction penalty (akin to the one used in sparse auto-encoders).
This reconstruction penalty removes the need for a constrained optimizer. As a result, we can implement our algorithm with only a few lines of MATLAB, and plug it directly into unconstrained
solvers (e.g., L-BFGS and CG [14]). This results in very fast convergence rates for our method.
In addition, recent ICA-based algorithms, such as tiled convolutional neural networks (also known as
local receptive field TICA) [12], also suffer from the difficulty of enforcing the hard orthonormality
constraint globally. As a result, orthonormalization is typically performed locally instead, which
results in copied (i.e., degenerate) features. Our reconstruction penalty, on the other hand, can be
enforced globally across all receptive fields. As a result, our method prevents degenerate features.
Furthermore, ICA's sensitivity to whitening is undesirable because exactly whitening high dimensional data is often not feasible. For example, exact whitening using principal component analysis (PCA) for input images of size 200×200 pixels is challenging, because it requires solving the eigendecomposition of a 40,000 × 40,000 covariance matrix. Other methods, such as sparse autoencoders
or RBMs, work well using approximate whitening and in some cases work even without any whitening. Standard ICA, on the other hand, tends to produce noisy filters unless the data is exactly white.
Our soft-reconstruction penalty shares the property of auto-encoders, in that it makes our approach
also less sensitive to whitening. Similarities between ICA, auto-encoders and sparse coding have
been observed empirically before (i.e., they all learn edge filters). Our contribution is to show a
formal proof and a set of conditions under which these algorithms are equivalent.
Finally, we use our algorithm for classifying STL-10 images [6] and Hollywood2 [15] videos. In
particular, on the STL-10 dataset, we learn highly overcomplete representations and achieve 52.9%
on the test set. On Hollywood2, we achieve 54.6 Mean Average Precision, which is also the best
published result on this dataset.
2 Standard ICA and Reconstruction ICA
We begin by introducing our proposed algorithm for overcomplete ICA. In subsequent sections
we will show how our method is related to ICA, sparse auto-encoders and sparse coding. Given
unlabeled data {x^{(i)}}_{i=1}^m, x^{(i)} ∈ R^n, regular ICA [11] is traditionally defined as the following optimization problem:

    min_W  Σ_{i=1}^m Σ_{j=1}^k g(W_j x^{(i)})   subject to  W W^T = I        (1)
where g is a nonlinear convex function, e.g., the smooth L1 penalty g(·) := log(cosh(·)) [16], W is the weight matrix W ∈ R^{k×n}, k is the number of components (features), and Wj is one row (feature) in W. The orthonormality constraint W W^T = I is used to prevent the bases in W from becoming degenerate. We refer to this as "non-degeneracy control" in this paper.
Typically, ICA requires data to have zero mean, Σ_{i=1}^m x^{(i)} = 0, and unit covariance, (1/m) Σ_{i=1}^m x^{(i)} (x^{(i)})^T = I. While the former can be achieved by subtracting the empirical mean,
the latter requires finding a linear transformation by solving the eigendecomposition of the covariance matrix [11]. This preprocessing step is also known as whitening or sphering the data.
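For reference, a standard PCA whitening sketch (ours; the paper does not prescribe an implementation, and the small eps regularizer is our addition):

```python
import numpy as np

def pca_whiten(X, eps=1e-8):
    """Zero-mean, (approximately) unit-covariance transform of X (n dims x m examples)."""
    X = X - X.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(X @ X.T / X.shape[1])
    P = vecs @ np.diag((vals + eps) ** -0.5) @ vecs.T    # whitening matrix
    return P @ X

Xw = pca_whiten(np.random.default_rng(0).normal(size=(8, 1000)))
print(np.allclose(Xw @ Xw.T / 1000, np.eye(8), atol=1e-5))  # True
```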
For overcomplete representations (k > n) [17, 18], the orthonormality constraint can no longer
hold. As a result, approximate orthonormalization (e.g., Gram-Schmidt) or fixed-point iterative
methods [11] have been proposed. These algorithms are often slow and require tuning. Other
approaches, e.g., interior point methods [19] or score matching [16] exist, but they are complicated
to implement and also slow. Score matching, for example, is difficult to implement and expensive
for multilayered algorithms like ISA or TICA, because it requires backpropagation of a Hessian
matrix.

¹ FastICA is a specialized solver that works well for complete or undercomplete ICA. Here, we focus our attention on ICA and its variants such as ISA and TICA in the context of overcomplete representations, where FastICA does not work.
These challenges motivate our search for a better type of non-degeneracy control for ICA. A frequently employed form of non-degeneracy control in auto-encoders and sparse coding is the use
of reconstruction costs. As a result, we propose to replace the hard orthonormal constraint in ICA
with a soft reconstruction cost. Applying this change to eq. 1 produces the following unconstrained
problem:

    Reconstruction ICA (RICA):   min_W  (λ/m) Σ_{i=1}^m ‖W^T W x^{(i)} − x^{(i)}‖₂² + Σ_{i=1}^m Σ_{j=1}^k g(W_j x^{(i)})        (2)
We use the term "reconstruction cost" for this smooth penalty because it corresponds to the reconstruction cost of a linear autoencoder, where the encoding weights and decoding weights are tied (i.e., the encoding step is W x^{(i)} and the decoding step is W^T W x^{(i)}).
The choice to swap the orthonormality constraint with a reconstruction penalty seems arbitrary at
first. However, we will show in the following section that these two forms of degeneracy control
are, in fact, equivalent under certain conditions. Furthermore, this change has two key benefits: first,
it allows unconstrained optimizers (e.g., L-BFGS, CG [20] and SGDs) to be used to minimize this
cost function instead of relying on slower constrained optimizers (e.g., projected gradient descent)
to solve the standard ICA cost function. And second, the reconstruction penalty works even when
W is overcomplete and the data not fully white.
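The practical payoff is that (2) needs only a cost and a gradient to hand to an off-the-shelf unconstrained solver. The paper mentions a few-line MATLAB implementation; a minimal Python/SciPy analogue of the same objective (our sketch, with g = log cosh and made-up problem sizes) is:

```python
import numpy as np
from scipy.optimize import minimize

def rica_cost_grad(w, X, k, lam):
    """Objective (2) and its gradient, flattened for L-BFGS.

    X: (n, m) data with examples as columns; W: (k, n) feature matrix.
    """
    n, m = X.shape
    W = w.reshape(k, n)
    WX = W @ X
    R = W.T @ WX - X                                  # residuals W^T W x - x
    cost = (lam / m) * np.sum(R ** 2) + np.sum(np.log(np.cosh(WX)))
    grad = (2 * lam / m) * W @ (X @ R.T + R @ X.T) + np.tanh(WX) @ X.T
    return cost, grad.ravel()

rng = np.random.default_rng(0)
n, m, k = 16, 500, 32                                 # overcomplete: k = 2n features
X = rng.normal(size=(n, m))
res = minimize(rica_cost_grad, rng.normal(scale=0.01, size=k * n),
               args=(X, k, 0.5), jac=True, method="L-BFGS-B",
               options={"maxiter": 200})
W = res.x.reshape(k, n)                               # learned overcomplete filters
```

Because the penalty is soft, nothing in this setup requires k ≤ n or exactly whitened data.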
3 Connections between orthonormality and reconstruction
Sparse autoencoders, sparse coding and ICA have been previously suspected to be strongly connected because they learn edge filters for natural image data. In this section we present formal
proofs that they are indeed mathematically equivalent under certain conditions (e.g., whitening and
linear coding). Our proofs reveal the underlying principles in unsupervised feature learning that tie
these algorithms together.
We start by reviewing the optimization problems of two common unsupervised feature learning
algorithms: sparse autoencoders and sparse coding. In particular, the objective function of tied-weight sparse autoencoders [8, 21, 22, 23] is:

    min_{W,b,c}  (λ/m) Σ_{i=1}^m ‖σ(W^T σ(W x^{(i)} + b) + c) − x^{(i)}‖₂² + S({W, b}, x^{(1)}, . . . , x^{(m)})        (3)
where ? is the activation function (e.g., sigmoid), b, c are biases, and S is some sparse penalty
function. Typically, S is chosen to be the smooth L1 penalty S({W, b}, x(i) , . . . , x(m) ) =
Pm P k
(i)
i=1
j=1 g(Wj x ) or KL divergence between the average activation and target activation [24].
Similarly, the optimization problem of sparse coding [10] is:

$$\min_{W,\, z^{(1)},\ldots,z^{(m)}}\; \frac{\lambda}{m}\sum_{i=1}^{m} \|W^T z^{(i)} - x^{(i)}\|_2^2 + \sum_{i=1}^{m}\sum_{j=1}^{k} g(z_j^{(i)}) \quad \text{subject to } \|W_j\|_2^2 \le c,\; \forall j = 1,\ldots,k \tag{4}$$
From these formulations, it is clear there are links between ICA, RICA, sparse autoencoders and
sparse coding. In particular, most methods use the L1 sparsity penalty and, except for ICA, most use
reconstruction costs as a non-degeneracy control. These observations are summarized in Table 1.
ICA's main distinction compared to sparse coding and autoencoders is its use of the hard orthonormality constraint in lieu of reconstruction costs. However, we will now present a proof (consisting
of two lemmas) that derives the relationship between ICA's orthonormality constraint and RICA's
reconstruction cost. We subsequently present a set of conditions under which RICA is equivalent to
sparse coding and autoencoders. The result is a novel and formal proof of the relationship between
ICA, sparse coding and autoencoders.
We let I denote an identity matrix, and I_l an identity matrix of size l × l. We denote the L2 norm by ‖·‖₂ and the matrix Frobenius norm by ‖·‖_F. We also assume that the data $\{x^{(i)}\}_{i=1}^{m}$ has zero mean.
Table 1: A summary of different unsupervised feature learning methods. "Non-degeneracy control" refers to the mechanism that prevents all bases from learning uninteresting weights (e.g., zero weights or identical weights). Note that using sparsity is optional in autoencoders.

Algorithm                                    | Sparsity                      | Non-degeneracy control                        | Activation function
Sparse coding [10]                           | L1                            | L2 reconstruction                             | Implicit
Autoencoders and denoising autoencoders [21] | Optional: KL [24] or L1 [22]  | L2 reconstruction (or cross entropy [21, 8])  | Sigmoid
ICA [16]                                     | L1                            | Orthonormality                                | Linear
RICA (this paper)                            | L1                            | L2 reconstruction                             | Linear
The first lemma states that the reconstruction cost and column orthonormality cost (see Footnote 2) are equivalent
when data is whitened (see the Appendix in the supplementary material for proofs):

Lemma 3.1  When the input data $\{x^{(i)}\}_{i=1}^{m}$ is whitened, the reconstruction cost $\frac{\lambda}{m}\sum_{i=1}^{m} \|W^T W x^{(i)} - x^{(i)}\|_2^2$ is equivalent to the orthonormality cost $\lambda \|W^T W - I\|_F^2$.
Our second lemma states that minimizing column orthonormality and row orthonormality costs turns
out to be equivalent due to a property of the Frobenius norm:
Lemma 3.2  The column orthonormality cost $\lambda \|W^T W - I_n\|_F^2$ is equivalent to the row orthonormality cost $\lambda \|W W^T - I_k\|_F^2$ up to an additive constant.
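Both lemmas are easy to verify numerically; a minimal sketch (assuming exact ZCA whitening, so the empirical covariance is the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, lam = 10, 5000, 6, 1.0

# Zero-mean data, whitened exactly (empirical covariance = I)
X = rng.standard_normal((n, m))
X -= X.mean(axis=1, keepdims=True)
evals, evecs = np.linalg.eigh(X @ X.T / m)
X = evecs @ np.diag(evals ** -0.5) @ evecs.T @ X   # ZCA whitening

W = rng.standard_normal((k, n))

# Lemma 3.1: reconstruction cost equals the column orthonormality cost
rec = (lam / m) * np.sum((W.T @ (W @ X) - X) ** 2)
col = lam * np.sum((W.T @ W - np.eye(n)) ** 2)

# Lemma 3.2: column and row costs differ only by an additive constant
row = lam * np.sum((W @ W.T - np.eye(k)) ** 2)

print(rec - col)   # ~0 up to numerical precision (Lemma 3.1)
print(col - row)   # equals lam * (n - k), independent of W (Lemma 3.2)
```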
Together these two lemmas tell us that reconstruction cost is equivalent to both column and row orthonormality cost for whitened data. Furthermore, as λ approaches infinity the orthonormality cost
becomes the hard orthonormality constraint of ICA (see equations 1 & 2) if W is complete or undercomplete. Thus, ICA's hard orthonormality constraint and RICA's reconstruction cost are related
under these conditions. More formally, the following remarks explain this conclusion, and describe
the set of conditions under which RICA (and by extension ICA) is equivalent to autoencoders and
sparse coding.
1) If the data is whitened, RICA is equivalent to ICA for undercomplete representations and λ
approaching infinity. For whitened data our RICA formulation:

$$\text{RICA:}\quad \min_{W}\; \frac{\lambda}{m}\sum_{i=1}^{m} \|W^T W x^{(i)} - x^{(i)}\|_2^2 + \sum_{i=1}^{m}\sum_{j=1}^{k} g(W_j x^{(i)}) \tag{5}$$

is equivalent (from the above lemmas) to:

$$\min_{W}\; \lambda \|W^T W - I\|_F^2 + \sum_{i=1}^{m}\sum_{j=1}^{k} g(W_j x^{(i)}), \text{ and} \tag{6}$$

$$\min_{W}\; \lambda \|W W^T - I\|_F^2 + \sum_{i=1}^{m}\sum_{j=1}^{k} g(W_j x^{(i)}) \tag{7}$$

Furthermore, for undercomplete representations, in the limit of λ approaching infinity, the orthonormalization costs above become hard constraints. As a result, they are equivalent to:

$$\text{Conventional ICA:}\quad \min_{W}\; \sum_{i=1}^{m}\sum_{j=1}^{k} g(W_j x^{(i)}) \quad \text{subject to } W W^T = I \tag{8}$$

which is just plain ICA, or ISA/TICA with appropriate choices of the sparsity function g.
2) Autoencoders and Sparse Coding are equivalent to RICA if
• in autoencoders, we use a linear activation function σ(x) = x, ignore the biases b, c, and use the soft L1 sparsity for the activations: $S(\{W, b\}, x^{(1)}, \ldots, x^{(m)}) = \sum_{i=1}^{m}\sum_{j=1}^{k} g(W_j x^{(i)})$, and
• in sparse coding, we use the explicit encoding $z_j^{(i)} = W_j x^{(i)}$ and ignore the norm ball constraints.
(Footnote 2: The column orthonormality cost is zero only if the columns of W are orthonormal.)
Despite their equivalence, certain formulations have certain advantages. For instance, RICA (eq. 2)
and soft orthonormalization ICA (eq. 6 and 7) are smooth and can be optimized efficiently by fast
unconstrained solvers (e.g., L-BFGS or CG) while the conventional constrained ICA optimization
problem cannot. Soft penalties are also preferred if we want to learn overcomplete representations
where explicitly constraining $W W^T = I$ is not possible (see Footnote 3).

(Footnote 3: Note that when W is overcomplete, some rows may degenerate and become zero, because the reconstruction constraint can be satisfied with only a complete subset of rows. To prevent this, we employ an additional norm ball constraint; see the Appendix for more details regarding L-BFGS and norm ball constraints.)
We derive an additional relationship in the appendix (see supplementary material), which shows that
for whitened data denoising autoencoders are equivalent to RICA with weight decay. Another interesting connection between RBMs and denoising autoencoders is derived in [25]. These connections
between RBMs, autoencoders, and denoising autoencoders, together with the fact that reconstruction cost captures
whitening (by the above lemmas), likely explain why whitening does not matter much for RBMs
and autoencoders in [6].
4  Effects of whitening on ICA and RICA
In practice, ICA tends to be much more sensitive to whitening compared to sparse autoencoders.
Running ICA on unwhitened data results in very noisy bases. In this section, we study empirically
how whitening affects ICA and our formulation, RICA.
We sampled 20000 patches of size 16x16 from a set of 11 natural images [16] and visualized the
filters learned using ICA and RICA with raw images, as well as approximately whitened images.
For approximate whitening, we use 1/f whitening with low pass filtering. This 1/f whitening transformation uses Fourier analysis of natural image statistics and produces transformed data which has
an approximate identity covariance matrix. This procedure does not require pretraining. As a result,
1/f whitening runs quickly and scales well to high dimensional data. We used the 1/f whitening
implementation described in [16].
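As a rough illustration of this preprocessing, a sketch of 1/f whitening for square gray-scale patches is shown below; the ramp-times-low-pass filter shape and the f0 cutoff are illustrative assumptions, not the exact implementation of [16].

```python
import numpy as np

def one_over_f_whiten(patches, f0_frac=0.4):
    """Approximate 1/f whitening with low-pass filtering.

    patches: array of shape (num_patches, d, d).
    Multiplies each patch's spectrum by |f| (undoing the roughly 1/|f|
    falloff of natural images) times a smooth low-pass rolloff at f0.
    """
    num, d, _ = patches.shape
    fx = np.fft.fftfreq(d)
    fy = np.fft.fftfreq(d)
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    f0 = f0_frac * f.max()
    filt = f * np.exp(-(f / f0) ** 4)        # ramp * low-pass rolloff

    spectra = np.fft.fft2(patches)
    whitened = np.real(np.fft.ifft2(spectra * filt[None, :, :]))
    # Rescale so the output has roughly unit variance overall
    return whitened / whitened.std()
```

No eigendecomposition of a covariance matrix is needed, which is why this procedure scales to large patch sizes.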
[Figure 1: ICA and RICA on approximately whitened and raw images. (a) ICA on 1/f whitened images; (b) ICA on raw images; (c) RICA on 1/f whitened images; (d) RICA on raw images. (a-b): Bases learned with ICA. (c-d): Bases learned with RICA. RICA retains some structures of the data whereas ICA does not (i.e., it learns noisy bases).]
Figure 1 shows the results of running ICA and RICA on raw and 1/f whitened images. As can be
seen, ICA learns very noisy bases on raw data, as well as approximately whitened data. In contrast,
RICA works well for 1/f whitened data and raw data. Our quantitative analysis with kurtosis (not
shown due to space limits) agrees with visual inspection: RICA learns more kurtotic representations
than ICA on approximately whitened or raw data.
Robustness to approximate whitening is desirable, because exactly whitening high dimensional data
using PCA may not be feasible. For instance, PCA on images of size 200x200 requires computing
the eigendecomposition of a 40,000 x 40,000 covariance matrix, which is computationally expensive. With RICA, approximate whitening or raw data can be used instead. This allows our method
to scale to higher dimensional data than regular ICA.
5  Local receptive field TICA

The first application of our RICA algorithm that we examine is local receptive field neural networks. The motivation behind local receptive fields is computational efficiency. Specifically, rather
than having each hidden unit connect to the entire input image, each unit is instead connected to a
small patch (see figure 2a for an illustration). This reduces the number of parameters in the model.
As a result, local receptive field neural networks are faster to optimize than their fully connected
counterparts. A major drawback of this approach, however, is the difficulty in enforcing orthogonality across partially overlapping patches. We show that swapping out locally enforced orthogonality
constraints with a global reconstruction cost solves this issue.
Specifically, we examine the local receptive field network proposed by Le et al. [12]. Their formulation constrains each feature (a row of W ) to connect to a small region of the image (i.e., all
weights outside of the patch are set to zero). This modification allows learning ICA and TICA with
larger images, because W is now sparse. Unlike standard convolutional networks, these networks
may be extended to have fully unshared weights. This permits them to learn invariances other than
translational invariances, which are hardwired in convolutional networks.
The pre-training step for the TCNN (local receptive field TICA) [12] is performed by minimizing
the following cost function:

$$\min_{W}\; \sum_{i=1}^{m}\sum_{j=1}^{k} \sqrt{\epsilon + H_j (W x^{(i)})^2}, \quad \text{subject to } W W^T = I \tag{9}$$

where H is the spatial pooling matrix and W is a learned weight matrix. The corresponding neural
network representation of this algorithm is one with two layers with weights W, H and nonlinearities $(\cdot)^2$ and $\sqrt{(\cdot)}$ respectively (see Figure 2a). In addition, W and H are set to be local. That is, each
row of W and H connects to a small region of the input data.
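To make the pooled cost concrete, a sketch of eq. 9 with the hard constraint swapped for our global reconstruction penalty follows; dense matrices are used for readability, whereas a real TCNN would use sparse, locally supported W and H.

```python
import numpy as np

def rica_tica_cost(W, H, X, lam=0.1, eps=1e-6):
    """Two-layer TICA cost (eq. 9) with a global reconstruction penalty.

    W: k x n first-layer weights; H: p x k fixed pooling matrix; X: n x m data.
    """
    WX = W @ X                                   # first layer, then square
    pooled = np.sqrt(eps + H @ (WX ** 2))        # second layer: pool + sqrt
    sparsity = np.sum(pooled)
    recon = (lam / X.shape[1]) * np.sum((W.T @ WX - X) ** 2)
    return sparsity + recon
```

Because the reconstruction term couples all rows of W, it penalizes degenerate or duplicated receptive fields across locations, which the purely local constraint below cannot do.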
[Figure 2: (a) Local receptive field neural network with fully untied weights. A single map consists of local receptive fields that do not share a location (i.e., only different colored nodes). (b & c) For illustration purposes we have brightened the area of each local receptive field within the input image. (b) Local orthogonalization: hard orthonormalization [12] is applied at each location only (i.e., nodes of the same color), which results in copied filters (for example, see the filters outlined in red; notice that the location of the edge stays the same within the image even though the receptive field areas are different). (c) RICA global reconstruction cost: global reconstruction (this paper) is applied both within each location and across locations (nodes of the same and different colors), which prevents copying of receptive fields.]
Enforcing the hard orthonormality constraint on the entire sparse W matrix is challenging because it
is typically overcomplete for TCNNs. As a result, Le et al. [12] performed local orthonormalization
instead. That is, only the features (rows of W ) that share a location (e.g., only the red nodes in figure
2) were orthonormalized using symmetric orthogonalization.
However, visualizing the filters learned by a TCNN with local orthonormalization shows that many
adjacent receptive fields end up learning the same (copied) filters due to the lack of an orthonormality
constraint between them. For instance, the green nodes in Figure 2 may end up being copies of the
red nodes (see the copied receptive fields in Figure 2b).
In order to prevent copied features, we replace the local orthonormalization constraint with a global
reconstruction cost (i.e., computing the reconstruction cost $\|W^T W x^{(i)} - x^{(i)}\|_2^2$ for the entire overcomplete sparse W matrix). Figure 2c shows the resulting filters. Figure 3 shows that the reconstruction penalty produces a better distribution of edge detector locations within the image patch
(this also holds true for frequencies and orientations).
[Figure 3: Location of each edge detector within the image patch (horizontal vs. vertical location, both axes spanning 0 to 1). Symbols of the same color/shape correspond to a single map. Left: local orthonormalization constraint. Right: global reconstruction penalty. The reconstruction penalty prevents copied filters, producing a more uniform distribution of edge detectors.]
6  Experiments
The following experiments compare the speed gains of RICA over standard overcomplete ICA. We
then use RICA to learn a large filter bank, and show that it works well for classification on the
STL-10 dataset.
6.1 Speed improvements for overcomplete ICA
In this experiment, we examine the speed performance of RICA and overcomplete ICA with score
matching [26]. We trained overcomplete ICA on 20000 gray-scale image patches, each patch of size
16x16. We learn representations that are 2x, 4x and 6x overcomplete. We terminate both algorithms
when changes in the parameter vector drop below 10⁻⁶. We use the score matching implementation
provided in [16]. We report the time required to learn these representations in Table 2. The results
show that our method is much faster than the competing method. In particular, learning features that
are 6x overcomplete takes 1 hour using our method, whereas [26] requires 2 days.
Table 2: Speed improvements of our method over score matching [26].

                    | 2x overcomplete | 4x overcomplete | 6x overcomplete
Score matching ICA  | 33000 seconds   | 65000 seconds   | 180000 seconds
RICA                | 1000 seconds    | 1600 seconds    | 3700 seconds
Speed up            | 33x             | 40x             | 48x
Figure 4 shows the peak frequencies and orientations for 4x overcomplete bases learned using our
method. The learned bases do not degenerate, and they cover a broad range of frequencies and
orientations (cf. Figure 3 in [27]). This ability to learn a diverse set of features allows our algorithm
to perform well on various discriminative tasks.
[Figure 4: Scatter plot of peak frequencies and orientations of Gabor functions fitted to the filters learned by RICA on whitened images. Our model yields a diverse set of filters that covers the spatial frequency space evenly.]
6.2  Overcomplete ICA on STL-10 dataset
In this section, we evaluate the overcomplete features learned by our model. The experiments are
carried out on the STL-10 dataset [6] where overcomplete representations have been shown to work
well. The STL-10 dataset contains 96x96 pixel color images taken from 10 classes. For each
class 500 training images and 800 test images are provided. In addition, 100,000 unlabeled images
are included for unsupervised learning. We use RICA to learn overcomplete features on 100,000
randomly sampled color patches from the unlabeled images in the STL-10 dataset. We then apply
RICA to extract features from images in the same manner described in [6].
Using the same number of features (1600) employed by Coates et al. [6] on 96x96 images and 10x10
receptive fields, our soft reconstruction ICA achieves 52.9% on the test set. This result is slightly
better than (but within the error bars of) the best published result, 51.5%, obtained by K-means [6].
[Figure 5: Classification accuracy on the STL-10 dataset as a function of the number of bases learned (for a patch size of 8x8 pixels). Axes: number of features (0 to 1600) vs. cross-validation accuracy (%); curves compare soft ICA and ICA on whitened and raw data against the 51.5% result of Coates et al. The best result shown uses bases that are 8x overcomplete.]
Finally, we compare classification accuracy as a function of the number of bases. Figure 5 shows
the results for ICA and RICA. Notice that the reconstruction cost in RICA allows us to learn overcomplete representations that outperform the complete representation obtained by the regular ICA.
6.3  Reconstruction Independent Subspace Analysis for action recognition
Recently we presented a system [13] for learning features from unlabelled data that can lead to
state-of-the-art performance on many challenging datasets such as Hollywood2 [15], KTH [28]
and YouTube [29]. This system makes use of a two-layered Independent Subspace Analysis (ISA)
network [16]. Like ICA, ISA also uses orthogonalization for degeneracy control (see Footnote 4).
In this section we compare the effects of reconstruction versus orthogonality on classification performance using ISA. In our experiments we swap out the orthonormality constraint employed by
ISA with a reconstruction penalty. Apart from this change, the entire pipeline and parameters are
identical to the system described in [13].
We observe that the reconstruction penalty tends to work better than orthogonality constraints. In
particular, on the Hollywood2 dataset ISA achieves a mean AP of 54.6% when the reconstruction
penalty is used. The performance of ISA drops to 53.3% when orthogonality constraints are used.
Both results are state-of-the-art results on this dataset [30]. We attribute the improvement in performance to the fact that features in invariant subspaces of ISA need not be strictly orthogonal.
7  Discussion
In this paper, we presented a novel soft reconstruction approach that enables the learning of overcomplete representations in ICA and TICA. We have also presented mathematical proofs that connect ICA with autoencoders and sparse coding. We showed that our algorithm works well even
without whitening; and that the reconstruction cost allows us to fix replicated filters in tiled convolutional neural networks. Our experiments show that RICA is fast and works well in practice. In
particular, we found our method to be 30-50x faster than overcomplete ICA with score matching.
Furthermore, our overcomplete features achieve state-of-the-art performance on the STL-10 and
Hollywood2 datasets.
(Footnote 4: Note that in ISA the square nonlinearity is used in the first layer, and the square root is used in the second layer [13].)
References
[1] M.A. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations with an
energy-based model. In NIPS, 2006.
[2] R. Raina, A. Battle, H. Lee, B. Packer, and A.Y. Ng. Self-taught learning: Transfer learning from unlabelled data. In ICML, 2007.
[3] M. Ranzato, F. J. Huang, Y. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies
with applications to object recognition. In CVPR, 2007.
[4] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image
classification. In CVPR, 2009.
[5] J. Yang, K. Yu, and T. Huang. Efficient highly over-complete sparse coding using a mixture model. In
ECCV, 2010.
[6] A. Coates, H. Lee, and A. Y. Ng. An analysis of single-layer networks in unsupervised feature learning.
In AISTATS 14, 2011.
[7] K. Yu, Y. Lin, and J. Lafferty. Learning image representations from pixel level via hierarchical sparse
coding. In CVPR, 2011.
[8] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layerwise training of deep networks. In
NIPS, 2007.
[9] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.
[10] B. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code
for natural images. Nature, 1996.
[11] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley Interscience, 2001.
[12] Q. V. Le, J. Ngiam, Z. Chen, D. Chia, P. W. Koh, and A. Y. Ng. Tiled convolutional neural networks. In
NIPS, 2010.
[13] Q. V. Le, W. Zou, S. Y. Yeung, and A. Y. Ng. Learning hierarchical spatio-temporal features for action
recognition with independent subspace analysis. In CVPR, 2011.
[14] Q. V. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A. Y. Ng. On optimization methods for deep
learning. In ICML, 2011.
[15] M. Marszalek, I. Laptev, and C. Schmid. Actions in context. In CVPR, 2009.
[16] A. Hyvärinen, J. Hurri, and P. O. Hoyer. Natural Image Statistics. Springer, 2009.
[17] B. Olshausen and D. Field. Sparse coding with an overcomplete basis set: A strategy employed by v1.
Vision Research, 1997.
[18] M. S. Lewicki and T. J. Sejnowski. Learning overcomplete representations. Neural Computation, 2000.
[19] L. Ma and L. Zhang. Overcomplete topographic independent component analysis. Elsevier, 2008.
[20] M. Schmidt. minFunc, 2005.
[21] P. Vincent, H. Larochelle, Y. Bengio, and P. A. Manzagol. Extracting and composing robust features with
denoising autoencoders. In ICML, 2008.
[22] H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. In NIPS, 2008.
[23] H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring strategies for training deep neural
networks. JMLR, 2009.
[24] G. Hinton. A practical guide to training restricted boltzmann machines. Technical report, U. of Toronto,
2010.
[25] P. Vincent. A connection between score matching and denoising autoencoders. Neural Computation,
2010.
[26] A. Hyvärinen. Estimation of non-normalized statistical models using score matching. JMLR, 2005.
[27] Y. Karklin and M.S. Lewicki. Is early vision optimized for extracting higher-order dependencies? In
NIPS, 2006.
[28] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. In ICPR,
2004.
[29] J. Liu, J. Luo, and M. Shah. Recognizing realistic actions from videos "in the Wild". In CVPR, 2009.
[30] Heng Wang, Muhammad Muneeb Ullah, Alexander Klaser, Ivan Laptev, and Cordelia Schmid. Evaluation
of local spatio-temporal features for action recognition. In BMVC, 2010.
3,830 | 4,468 | Inferring spike-timing-dependent plasticity from spike train data
Ian H. Stevenson and Konrad P. Kording
Department of Physical Medicine and Rehabilitation
Northwestern University
{i-stevenson, kk}@northwestern.edu
Abstract
Synaptic plasticity underlies learning and is thus central for development, memory, and recovery from injury. However, it is often difficult to detect changes in
synaptic strength in vivo, since intracellular recordings are experimentally challenging. Here we present two methods aimed at inferring changes in the coupling
between pairs of neurons from extracellularly recorded spike trains. First, using
a generalized bilinear model with Poisson output we estimate time-varying coupling assuming that all changes are spike-timing-dependent. This approach allows
model-based estimation of STDP modification functions from pairs of spike trains.
Then, using recursive point-process adaptive filtering methods we estimate more
general variation in coupling strength over time. Using simulations of neurons undergoing spike-timing dependent modification, we show that the true modification
function can be recovered. Using multi-electrode data from motor cortex we then
illustrate the use of this technique on in vivo data.
1  Introduction
One of the fundamental questions in computational neuroscience is how synapses are modified by
neural activity [1, 2]. A number of experimental results, using intracellular recordings in vitro, have
shown that synaptic plasticity depends on the precise pairing of pre- and post-synaptic spiking [3].
While such spike-timing-dependent plasticity (STDP) is thought to serve as a powerful regulatory
mechanism [4], measuring STDP in vivo using intracellular recordings is experimentally difficult
[5]. Here we instead attempt to estimate STDP in vivo by using simultaneously recorded extracellular spike trains and develop methods to estimate the time-varying strength of synapses.
In the past few years model-based methods have been developed that allow the estimation of coupling between pairs of neurons from spike train data [6, 7, 8, 9, 10, 11]. These methods have been
successfully applied to data from a variety of brain areas including retina [10], hippocampus [8], as
well as cortex [12]. While anatomical connections between pairs of extracellularly recorded neurons are generally not guaranteed, these phenomenological methods regularly improve encoding
accuracy and provide a statistical description of the functional coupling between neurons.
Here we present two techniques that extend these statistical methods to time-varying coupling between neurons and allow the estimation of spike-timing-dependent plasticity from spike trains. First
we introduce a generative model for time-varying coupling between neurons where the changes
in coupling strength depend on the relative timing of pre- and post-synaptic spikes: a bilinear-nonlinear-Poisson model. We then present two approaches for inferring STDP modification functions from spike data. We test these methods on both simulated data and data recorded from the
motor cortex of a sleeping macaque monkey.
[Figure 1: Generative model. A) A generative model of spikes where the coupling between neurons undergoes spike-timing dependent modification: pre-synaptic spikes pass through a coupling filter scaled by the synaptic strength, combine with the post-spike history and a modification function of t_pre − t_post, and drive predicted spiking through a nonlinearity. Post-synaptic spiking is modeled as a doubly stochastic Poisson process with a conditional intensity that depends on the neuron's own history and coupling to a pre-synaptic neuron. We consider the case where the strength of the coupling changes over time, depending on the relative timing of pre- and post-synaptic spikes through a modification function. B) As the synaptic strength changes over time, the influence of the pre-synaptic neuron on the post-synaptic neuron changes. Insets illustrate two points in time where synaptic strength is low (left) and high (right), respectively. Red lines illustrate the time-varying influence of the pre-synaptic neuron, while the black lines denote the static influence.]
2  Methods
Many studies have examined nonstationarity in neural systems, including for decoding [13], unitary
event detection [14], and assessing statistical dependencies between neurons [15]. Here we focus
specifically on non-stationarity in coupling between neurons due to spike-timing dependent modification of synapses. Our aim is to provide a framework for inferring spike-timing dependent modification functions from spike train data alone. We first present a generative model for spike trains
where neurons are undergoing STDP. We then present two methods for estimating spike-timing
dependent modification functions from spike train data: a direct method based on a time-varying
generalized linear model (GLM) and an indirect method based on point-process adaptive filtering.
2.1  A generative model for coupling with spike-timing dependent modification
While STDP has traditionally been modeled using integrate-and-fire neurons [4, 16], here we
model neurons undergoing STDP using a simple rate model of coupling between neurons, a linear-nonlinear-Poisson (LNP) model. In our LNP model, the conditional intensity (instantaneous firing
rate) of a neuron is given by a linear combination of covariates passed through a nonlinearity. Here,
we assume that this nonlinearity is exponential, and the LNP reduces to a generalized linear model
(GLM) with a canonical log link function.
The covariates driving variations in the neuron's firing rate can depend on the past spiking history of
the neuron, the past spiking history of other neurons (coupling), as well as any external covariates
such as visual stimuli [10] or hand movement [12]. To model coupling from a pre-synaptic neuron
to a post-synaptic neuron, here we assume that the post-synaptic neuron's firing is generated by
$$\lambda(t \mid H_t, \alpha, \beta) = \exp\left(\beta_0 + \sum_i f_i(n_{post}(t-\tau:t))\,\alpha_i + \sum_j f_j(n_{pre}(t-\tau:t))\,\beta_j\right)$$
$$n_{post}(t) \sim \text{Poisson}\left(\lambda(t \mid H_t, \alpha, \beta)\,\Delta t\right) \tag{1}$$
where λ(t | H_t, α, β) is the conditional intensity of the post-synaptic neuron at time t, given a short
history of past spikes from the two neurons H_t and the model parameters. β₀ defines a baseline
firing rate, which is modulated by both the neuron's own spike history from t−τ to t, n_post(t−τ:t),
and the history of the pre-synaptic neuron n_pre(t−τ:t) (together abbreviated as H_t). Here we
have assumed that the post-spike history and coupling effects are mapped into a smooth basis by
a set of functions f_i and then weighted by a set of post-spike coefficients α and a set of coupling
coefficients β. Finally, we assume that spikes n_post(t) are generated by a Poisson random variable
with rate λ(t | H_t, α, β)Δt.
This model has been used extensively over the past few years to model coupling between neurons
[10, 12]. Details and extensions of this basic form have been previously published [6]. It is important
to note, however, that the parameters α and β can be easily estimated by maximizing the log-likelihood. Since the likelihood is log-concave [9], there is a single, global solution which can be
found quickly by a number of methods, such as iterative reweighted least squares (IRLS, used here).
Here we consider the case where the coupling strength can vary over time, and particularly as a
function of precise timing between pre- and post-synaptic spikes. To incorporate these spike-timing
dependent changes in coupling into the generative model we introduce a time-varying coupling
strength or "synaptic weight" w(t):
$$\lambda(t \mid X, \alpha, \beta) = \exp\left(\beta_0 + X_s(t)\alpha + w(t)X_c(t)\beta\right)$$
$$n_{post}(t) \sim \text{Poisson}\left(\lambda(t \mid X, \alpha, \beta)\,\Delta t\right) \tag{2}$$

where w(t) changes based on the relative timing of pre- and post-synaptic spikes. Here, for simplicity, we have re-written the stable post-spike history and coupling terms in matrix form. The
vector X_s(t) summarizes the post-spike history covariates at time t while X_c(t) summarizes the
covariates related to the history of the pre-synaptic neuron. In this model, the synaptic weight w(t)
simply acts to scale the stable coupling defined by β, and we update w(t) such that every pre-post
spike pair alters the synaptic weight independently following the second spike.
Under this model, the firing rate of the post-synaptic neuron is influenced by its own past spiking, as
well as the activity of a pre-synaptic neuron. A synaptic weight determining the strength of coupling
between the two neurons changes over time depending on the relative spike timing (Fig 1A).
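For concreteness, a minimal sketch of evaluating the conditional intensity of eq. 2 and sampling post-synaptic spikes is given below; the basis handling and variable names are our own simplifications, not the authors' code.

```python
import numpy as np

def simulate_post(n_pre, Xs_basis, Xc_basis, alpha, beta, beta0, w, dt=0.001):
    """Sample post-synaptic spikes from the doubly stochastic Poisson model.

    n_pre:    binary pre-synaptic spike train (length T)
    *_basis:  (n_basis, lag) smooth basis filters for history/coupling
    w:        time-varying synaptic weight, length T
    """
    T = len(n_pre)
    n_post = np.zeros(T, dtype=int)
    # Coupling covariates: pre-synaptic history filtered by each basis fn
    # (b[0] acts at zero lag; shift by one bin for strict causality)
    Xc = np.stack([np.convolve(n_pre, b)[:T] for b in Xc_basis], axis=1)
    lag = Xs_basis.shape[1]
    for t in range(1, T):
        # Post-spike history covariates use spikes sampled so far
        hist = n_post[max(0, t - lag):t][::-1]     # index 0 = most recent bin
        Xs = Xs_basis[:, :len(hist)] @ hist
        lam = np.exp(beta0 + Xs @ alpha + w[t] * (Xc[t] @ beta))
        n_post[t] = np.random.poisson(lam * dt)
    return n_post
```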
In the simulations that follow we consider three types of modification functions: 1) a traditional
double-exponential function that accurately models STDP found in cortical and hippocampal slices,
2) a mexican-hat type function that qualitatively matches STDP found in GABA-ergic neurons in
hippocampal cultures, and 3) a smoothed double-exponential function that has recently been demonstrated to stabilize weight distributions [17].
The double-exponential modification function is consistent with original STDP observations [2, 3]
and has been used extensively in simulated populations of integrate-and-fire neurons [4, 16]. In this
case each pair of pre- and post-synaptic spikes modifies the synapse by

$$\Delta w(t_{pre} - t_{post}) = \begin{cases} A_+ \exp\!\left(\dfrac{t_{pre} - t_{post}}{\tau_+}\right) & \text{if } t_{pre} < t_{post} \\[6pt] -A_- \exp\!\left(-\dfrac{t_{pre} - t_{post}}{\tau_-}\right) & \text{if } t_{pre} \ge t_{post} \end{cases} \tag{3}$$

where t_pre and t_post denote the relative spike times, and the parameters A₊, A₋, τ₊, and τ₋ determine the magnitude and drop-off of each side of the double-exponential. This creates a sharp
boundary where the synapse is strengthened whenever pre-synaptic spikes appear to cause post-synaptic spikes and weakened when post-synaptic spikes do not immediately follow pre-synaptic
spikes.
Similarly, mexican-hat type functions qualitatively match observations of STDP in GABA-ergic
neurons in hippocampal cultures [18], where

$$\Delta w(t_{pre} - t_{post}) = A_+ \exp\!\left(\frac{-(t_{pre} - t_{post})^2}{2\sigma_+^2}\right) + A_- \exp\!\left(\frac{-(t_{pre} - t_{post})^2}{2\sigma_-^2}\right) \tag{4}$$

For σ₋ > σ₊ this corresponds to a more general Hebbian rule, where synapses are strengthened
whenever pre- and post-synaptic spikes occur in close proximity. When spikes do not occur in close
proximity the synapse is weakened. In this case, the parameters A₊, A₋, σ₊, and σ₋ determine
the magnitude and standard deviation of the positive and negative components of the modification
function.
Finally, we consider a smoothed double-exponential modification function that has recently been
shown to stabilize weight distributions. The sharp causal boundary in the classical double-exponential tends to drive synaptic weights either towards a maximum or to zero. By adding noise
to t_pre − t_post, this causal boundary can be smoothed and weight distributions become stable [17].
Here we add Gaussian noise to (3) such that $(t_{pre} - t_{post})' = (t_{pre} - t_{post}) + \epsilon$, with $\epsilon \sim N(0, \sigma^2)$.
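The three modification functions are straightforward to write down in code; a sketch follows, with parameter values that are purely illustrative:

```python
import numpy as np

def dw_double_exp(dt_pp, A_plus=5e-3, A_minus=5e-3,
                  tau_plus=0.02, tau_minus=0.02):
    """Eq. 3: dt_pp = t_pre - t_post (seconds)."""
    return np.where(dt_pp < 0,
                    A_plus * np.exp(dt_pp / tau_plus),      # pre before post
                    -A_minus * np.exp(-dt_pp / tau_minus))  # pre after post

def dw_mexican_hat(dt_pp, A_plus=5e-3, A_minus=-2e-3,
                   s_plus=0.01, s_minus=0.05):
    """Eq. 4: near-coincidence Hebbian rule (A_minus < 0, s_minus > s_plus)."""
    return (A_plus * np.exp(-dt_pp ** 2 / (2 * s_plus ** 2))
            + A_minus * np.exp(-dt_pp ** 2 / (2 * s_minus ** 2)))

def dw_smoothed(dt_pp, sigma=0.005, **kwargs):
    """Smoothed double-exponential [17]: jitter the relative timing."""
    noisy = dt_pp + np.random.normal(0.0, sigma, size=np.shape(dt_pp))
    return dw_double_exp(noisy, **kwargs)
```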
It is important to note that, unlike more common integrate-and-fire models of STDP, these modification functions do not describe a change in the magnitude of post-synaptic potentials (PSPs). Rather,
Δw defines a change in the statistical influence of the pre-synaptic neuron on the post-synaptic
neuron. When w(t)X_c(t)β is large, the post-synaptic neuron is more likely to fire following a pre-synaptic spike. However, in this bilinear form, w(t) is only uniquely defined up to a multiplicative
constant.
This generative model includes two distinct components: a GLM that defines the stationary firing
properties of the post-synaptic neuron and a modification function that defines how the coupling
between the pre- and post-synaptic neuron changes over time as a function of relative spike timing.
In simulating isolated pairs of neurons, each of the modification functions described above induces
large variations in the synaptic weight. For the sake of stable simulation we add an additional long-timescale forgetting factor that pushes the synaptic weights back to 1. Namely,

$$w(t + \Delta t) = \begin{cases} w(t) - \dfrac{\Delta t}{\tau_f}\,(w(t) - 1) + \Delta w(t_{pre} - t_{post}) & \text{if } n_{pre} \text{ or } n_{post} = 1 \\[6pt] w(t) - \dfrac{\Delta t}{\tau_f}\,(w(t) - 1) & \text{otherwise} \end{cases} \tag{5}$$

where, here, we use τ_f = 60 s. The next sections describe two methods for estimating time-varying
synaptic strength as well as STDP modification functions from spike train data.
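Before turning to those methods, here is a minimal sketch of simulating the weight trajectory of eq. 5; the 100 ms pairing window, the time step, and the per-pair bookkeeping are our assumptions.

```python
import numpy as np

def simulate_weight(n_pre, n_post, dw_fn, dt=0.001, tau_f=60.0, window=0.1):
    """Evolve w(t) by eq. 5: exponential forgetting toward 1, plus a
    spike-timing dependent jump for every pre/post pair (eq. 3 or 4)."""
    T = len(n_pre)
    t_pre = np.flatnonzero(n_pre) * dt
    t_post = np.flatnonzero(n_post) * dt
    w = np.ones(T)
    for t in range(1, T):
        w[t] = w[t - 1] - (dt / tau_f) * (w[t - 1] - 1.0)
        now = t * dt
        if n_pre[t]:    # this pre spike completes pairs with recent posts
            for tp in t_post[(t_post > now - window) & (t_post <= now)]:
                w[t] += dw_fn(now - tp)    # t_pre - t_post >= 0: depression
        if n_post[t]:   # this post spike completes pairs with recent pres
            for tp in t_pre[(t_pre > now - window) & (t_pre <= now)]:
                w[t] += dw_fn(tp - now)    # t_pre - t_post < 0: potentiation
    return w
```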
2.2  Point-process adaptive filtering of coupling strength
Several recent studies have examined the possibility that the tuning properties of neurons may drift
over time. In this context, techniques for estimating arbitrary changes in the parameters of LNP models have been especially useful. Point-process adaptive filtering is one such method which allows
accurate estimation of arbitrary time-varying parameters within LNP models and GLMs [19, 20].
The goal of this filtering approach is to update the model parameters at each time step, following spike observations, based on the instantaneous likelihood. Here we use this approach to track
variations in coupling strength between two neurons over time.
Details and a complete derivation of this model have been previously presented [20]. Briefly, the
basic recursive point-process adaptive filter follows a standard state-space modeling approach and
assumes that the model parameters in a GLM, such as (1), vary according to a random walk
$$\theta_{t+1} = F_t \theta_t + \eta_t \tag{6}$$

where F_t denotes the transition matrix from one timestep to the next and η_t ∼ N(0, Q_t) denotes
Gaussian noise with covariance Q_t. Given this state-space assumption, we can update the model
parameters θ given incoming spike observations. The prediction density at each timestep is given
by

$$\theta_{t|t-1} = F_t\, \theta_{t-1|t-1}, \qquad W_{t|t-1} = F_t\, W_{t-1|t-1} F_t^T + Q_t \tag{7}$$

where θ_{t−1|t−1} and W_{t−1|t−1} denote the estimated mean and covariance from the previous
timestep. Given a new spike count observation n_t, we then integrate this prior information with
the likelihood to obtain the posterior. Here, for simplicity, we use a quadratic expansion of the
log-posterior (a Laplace approximation). When log λ is linear in the parameters, the conditional
intensity and posterior are given by

$$\lambda_t = \exp(X_t\, \theta_{t|t-1} + c_t)$$
$$W_{t|t}^{-1} = W_{t|t-1}^{-1} + X_t^T [\lambda_t \Delta t] X_t$$
$$\theta_{t|t} = \theta_{t|t-1} + W_{t|t}\left[ X_t^T (n_t - \lambda_t \Delta t) \right] \tag{8}$$

where X_t denotes the covariates corresponding to the state-space variable, and c_t describes variation
in log λ that is assumed to be stable over time. Here, the state-space variable is coupling strength,
and stable components of the model, such as post-spike history effects, are summarized with c_t. The
initial values of θ and W can be estimated using a short training period before filtering. The only
free parameters are those describing the state-space: F and Q. In the analysis that follows we will
reduce the problem to a single dimension, where the shape of coupling is fixed during training, and
we apply the point-process adaptive filter to a single coefficient for the covariate X′(t) = X_c(t)β.
Together, (7) and (8) allow us to track changes in the model parameters over time. Given an estimate
of the time-varying synaptic weight ŵ(t), we can then estimate the modification function Δŵ(t_pre − t_post) by correlating the estimated changes in ŵ(t) with the relative spike timings that we observe.
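A single step of this recursive filter, specialized to the scalar coupling weight tracked here (random walk, F = 1), might be sketched as follows; this is our own illustration, not the authors' implementation.

```python
import numpy as np

def adaptive_filter_step(theta, W_var, x_t, n_t, c_t, q, dt=0.001):
    """One recursive point-process filter update for a scalar state.

    theta, W_var: posterior mean/variance from the previous timestep
    x_t: coupling covariate X'(t) = X_c(t) @ beta
    n_t: observed spike count in this bin
    c_t: stable part of the log-intensity (baseline + post-spike history)
    q:   process noise variance
    """
    # Prediction (eq. 7) with F = 1 (random walk)
    theta_pred, W_pred = theta, W_var + q
    # Update (eq. 8) via a Laplace approximation to the posterior
    lam = np.exp(x_t * theta_pred + c_t)
    W_post = 1.0 / (1.0 / W_pred + x_t * (lam * dt) * x_t)
    theta_post = theta_pred + W_post * x_t * (n_t - lam * dt)
    return theta_post, W_post
```

Running this update over all bins yields the noisy weight trajectory ŵ(t) that is then correlated with the observed relative spike timings.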
2.3  Inferring STDP with a nonparametric, generalized bilinear model
Point-process adaptive filtering allows us to track noisy changes in coupling strength over time.
However, it does not explicitly model the fact that these changes may be spike-timing dependent.
In this section we introduce a method to directly infer modification functions from spike train data.
Specifically, we model the modification function non-parametrically by generating covariates W
that depend on the relative spike timing. This non-parametric approximation to the modification
function gives a generalized bilinear model (GBLM):

$$\lambda(t \mid X, W, \alpha, \beta, \theta_w) = \exp\left(\beta_0 + X_s(t)\alpha + \theta_w^T W^T(t)\, X_c(t)\beta\right)$$
$$n_{post}(t) \sim \text{Poisson}\left(\lambda(t \mid X, W, \alpha, \beta, \theta_w)\,\Delta t\right) \tag{9}$$
where θ_w describes the modification function and W(t)θ_w approximates w(t). Each of the K
STDP covariates, W_k, describes the cumulative effect of spike pairs t_pre − t_post within a specific
range [T_k^−, T_k^+],

$$W_k(t + \Delta t) = W_k(t) - \frac{\Delta t}{\tau_f}\,(W_k(t) - 1) + \mathbf{1}\!\left(t_{pre} - t_{post} \in [T_k^-, T_k^+]\right) \tag{10}$$

such that, together, W(t)θ_w captures the time-varying coupling due to pre-post spike pairs within a
given window (i.e., −100 to 100 ms). To model any decay in STDP over time, we, again, allow these
covariates to decay exponentially with τ_f.
In this form, maximum likelihood estimation along each axis is a log-concave optimization problem
[21]. The parameters describing the modification function θ_w and the parameters describing the
stable parts of the model α and β can be estimated by holding one set of parameters fixed while
updating the other and alternating between the two optimizations. In practice, convergence is relatively fast, with the deviance changing by < 0.1% within 3 iterations (Fig 3A), and, empirically,
using random restarts, we find that the solutions tend to be stable. In addition to estimates of the
post-spike history and coupling filters, the GBLM thus provides a non-parametric approximation
to the modification function and explicitly accounts for spike-timing dependent modification of the
coupling strength.
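A sketch of constructing the STDP covariates of eq. 10 from a pair of binned spike trains (the bin edges and the 100 ms pairing window are illustrative assumptions):

```python
import numpy as np

def stdp_covariates(n_pre, n_post, edges, dt=0.001, tau_f=60.0, window=0.1):
    """Build the T x K covariate matrix W(t) of eq. 10.

    edges: K+1 boundaries of the t_pre - t_post bins, e.g.
           np.linspace(-0.1, 0.1, 21) for 10-ms bins over +/-100 ms.
    """
    T, K = len(n_pre), len(edges) - 1
    Wcov = np.ones((T, K))
    t_pre = np.flatnonzero(n_pre) * dt
    t_post = np.flatnonzero(n_post) * dt
    for t in range(1, T):
        # exponential decay of each covariate back toward 1
        Wcov[t] = Wcov[t - 1] - (dt / tau_f) * (Wcov[t - 1] - 1.0)
        now = t * dt
        dts = []
        if n_pre[t]:   # pairs completed by this pre spike
            dts += list(now - t_post[(t_post > now - window) & (t_post <= now)])
        if n_post[t]:  # pairs completed by this post spike
            dts += list(t_pre[(t_pre > now - window) & (t_pre <= now)] - now)
        for d in dts:  # increment the indicator for the matching bin
            k = np.searchsorted(edges, d) - 1
            if 0 <= k < K:
                Wcov[t, k] += 1.0
    return Wcov
```

Given Wcov, the synaptic weight is approximated as Wcov @ theta_w, and theta_w is fit jointly with the stable GLM terms by the alternating optimization described above.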
3  Results
To examine the accuracy and convergence properties of the two inference methods presented above,
we sampled spike trains from the generative model with various parameters. We simulated a pre-synaptic neuron as a homogeneous Poisson process with a firing rate of 5 Hz, and the post-synaptic
neuron as a conditionally Poisson process with a baseline firing rate of 5 Hz. Through the GBLM,
the post-synaptic neuron's firing rate is affected by its own post-spike history as well as the activity
of the pre-synaptic neuron (modeled using 5 raised cosine basis functions [10]). However, as STDP
occurs the strength of coupling between the neurons changes according to one of three modification
functions: a double-exponential, a mexican-hat, or a smoothed double-exponential (Fig 2).
We find that both point-process adaptive filtering and the generalized bilinear model are able to accurately reconstruct the time-varying synaptic weight for each type of modification function (Fig 2,
left). However, adaptive filtering generally provides a much less accurate estimate of the underlying
modification function than the GBLM (Fig 2, center). Since the adaptive filter only updates the
[Figure 2: Recovering simulated STDP. Spikes were simulated from two neurons whose coupling varied over time, depending on the relative timing of pre- and post-synaptic spikes. Using two distinct methods (point-process adaptive filtering and the GBLM) we estimated the time-varying coupling strength and modification function from simulated spike train data. Results are shown for three different modification functions: A) double-exponential, B) mexican-hat, and C) smoothed double-exponential. Black lines denote true values, red lines denote estimates from adaptive filtering, and blue lines denote estimates from the GBLM. The post-spike history and coupling terms are shown at left for the GBLM as multiplicative gains exp(·). Error bars denote standard errors for the post-spike and coupling filters and 95% confidence intervals for the modification function estimates.]
synaptic weight following the observations n_t, this is not entirely surprising. Changes in coupling
strength are only detected by the filter after they have occurred and become evident in the spiking
of the post-synaptic neuron. In contrast to the GBLM, there is a substantial delay between changes
in the true synaptic weight and those estimated by the adaptive filter. In this case, we find that the
accuracy of the adaptive filter follows changes in the synaptic weight approximately exponentially
with τ ≈ 25 ms (Fig 3B).
An important question for the practical application of these methods is how much data is necessary to
detect and accurately estimate modification functions for various effect sizes. Since the size of spike-timing dependent changes may be small in vivo, it is essential that we know under which conditions
modification functions can be recovered. Here we simulated the standard double-exponential STDP
model with several different effect sizes, modifying A₊ and A₋ and examining the estimation error
in both ŵ(t) and Δŵ(t_pre − t_post) (Fig 3). The three different effect sizes simulated here used
coupling kernels similar to Fig 2A and began with w(t) = 1. After spike simulation the standard
deviation in w(t) was 0.060 ± 0.002 for the small effect size, 0.13 ± 0.01 for the medium effect size,
and 0.27 ± 0.01 for the large effect size. For all effect sizes, we found that with small amounts of data
(< 1000 s), the GBLM tends to over-fit the data. In these situations Adaptive Filtering reconstructs
both the synaptic weight (Fig 3E) and modification function (Fig 3F) more accurately than the
GBLM (Fig 3C,D). However, once enough data is available maximum likelihood estimation of the
GBLM out-performs both the stable coupling model and adaptive filtering. The extent of over-fitting
can be assessed by the cross-validated log likelihood ratio relative to the homogeneous Poisson
process (Fig 3G, shown in log2 for 2-fold cross-validation). Here, the stable coupling model has an
average cross-validated log likelihood ratio relative to a homogeneous Poisson process of 0.185 ±
0.004 bits/spike across all effect sizes. Even in this controlled simulation the contribution of time-varying coupling is relatively small. Both the GBLM and Adaptive Filtering only increase the log
likelihood relative to a homogeneous Poisson process by 3-4% for the parameters used here at the
largest recording length.
[Figure 3: Estimation errors for simulated STDP. A) Convergence of the joint optimization problem for three different effect sizes. Filled circles denote updates of the stable coupling terms. Open circles denote updates of the modification function terms. Note that after 3 iterations the deviance is changing by < 0.1% and the model has (essentially) converged. B) Cross-correlation between changes in the true synaptic weight and estimated weight for the GBLM and Adaptive Filter. Note that Adaptive Filtering fails to predict weight changes as they occur (estimation delay τ ≈ 25 ms). Error bars denote SEM across N=10 simulations at the largest effect size. C,D) Correlation between the simulated and estimated synaptic weight (C) and modification function (D) for the GBLM as a function of the recording length. E,F) Correlation between the simulated and estimated synaptic weight and modification function for Adaptive Filtering. Error bars denote SEM across N=40 simulations for each effect size. G) Cross-validated (2-fold) log likelihood relative to a homogeneous Poisson process for the GBLM and Adaptive Filtering models. The GBLM (blue) over-fits for small amounts of data, but eventually out-performs both the stable coupling model (gray) and Adaptive Filtering (red). Error bands denote SEM across N=120 simulations, all effect sizes.]
[Figure 4: Results for data from monkey motor cortex. A) Log likelihood relative to a homogeneous Poisson process for each of four models: a stable GLM with only post-spike history (PSH), a stable GLM with PSH and coupling, the GBLM, and the Adaptive Filter. Bars and error bars denote median and inter-quartile range. * denotes significance under a paired t-test, p<0.05. B) The average modification function estimated under the GBLM for N=75 pairs of neurons. C) The modification function estimated from adaptive filtering for the same data. In both cases there does not appear to be a strong, stereotypically shaped modification function. D) The degree to which adding nonstationary coupling improves model accuracy does not appear to be related to coupling strength, as measured by how much the PSH+Coupling model improves model accuracy over the PSH model.]
Finally, to test these methods on actual neural recordings, we examined multi-electrode recordings
from the motor cortex of a sleeping macaque monkey. The experimental details of this task have
been previously published [22]. Approximately 180 minutes of data from 83 neurons were collected
(after spike sorting) during REM and NREM sleep.
In the simulations above we assumed that the forgetting factor τ_f was known. For the GBLM τ_f
determines the timescale of the spike-timing dependent covariates X_w, while for adaptive filtering
τ_f defines the transition matrix F. In the analysis that follows we make the simplifying assumption
that the forgetting factor is fixed at τ_f = 60 s. Additionally, during adaptive filtering we fit the
variance of the process noise Q by maximizing the cross-validated log-likelihood.
Analyzing the most strongly correlated 75 pairs of neurons during the 180 minute recording (2-fold
cross-validation) we find that the GBLM and Adaptive Filtering both increase model accuracy (Fig
4A). However, the resulting modification functions do not show any of the structure previously seen
in intracellular experiments. In both individual pairs and the average across pairs (Fig 4B,C) the
modification functions are noisy and generally not significantly different from zero. Additionally,
we find that the increase in model accuracy provided by adding non-stationary coupling to the traditional, stable coupling GLM does not appear to be correlated with the strength of coupling itself.
These results suggest that STDP may be difficult to detect in vivo, requiring even longer recordings
or, possibly, different electrode configurations. Particularly, with the electrode array used here (Utah
array, 400 μm electrode spacing), neurons are unlikely to be mono-synaptically connected.
4  Discussion
Here we have presented two methods for estimating spike-timing dependent modification functions
from multiple spike train data: an indirect method based on point-process adaptive filtering and a
direct method using a generalized bilinear model. We have shown that each of these methods is able
to accurately reconstruct both ongoing fluctuations in synaptic weight and modification functions in
simulation. However, there are several reasons that detecting similar STDP in vivo may be difficult.
In vivo, pairs of neurons do not act in isolation. Rather, each neuron receives input from thousands of
other neurons, inputs which may confound estimation of the coupling between a given pair. It would
be relatively straightforward to include multiple pre-synaptic neurons in the model using either
stable coupling [6, 10] or time-varying, spike-timing dependent coupling. Additionally, unobserved
common input or external covariates, such as hand position, could also be included in the model.
These extra covariates should further improve spike prediction accuracy, and could, potentially,
result in better estimation of STDP modification functions.
Despite these caveats the statistical description of time-varying coupling presented here shows
promise. Although the neurons in vivo are not guaranteed to be anatomically connected and estimated coupling must be always be interpreted cautiously [11], including synaptic modification
terms does improve model accuracy on in vivo data. Several experimental studies have even suggested that understanding plasticity may not require well-isolated pairs of neurons. The effects of
STDP may be visible through poly-synaptic potentiation [23, 24, 25]. In analyzing real data our
ability to detect STDP may vary widely across experimental preparations. For instance, recordings
from hippocampal slice or dissociated neuronal cultures may reveal substantially more plasticity
than in vivo cortical recordings and are less likely to be confounded by unobserved common-input.
There are a number of extensions to the basic Adaptive Filtering and GBLM frameworks that may
yield more accurate estimation and more biophysically realistic models of STDP. The over-fitting
observed in the GBLM could be reduced by regularizing the modification function, and Adaptive
Smoothing (using both forward and backward updates) will likely out-perform Adaptive Filtering
as used here. By changing the functional form of the covariates included in the GBLM we may be
able to distinguish between standard models of STDP where spike pairs are treated independently
and other models such as those with self-normalization [16] or where spike triplets are considered
[26]. Ultimately, the framework presented here extends recent GLM-based approaches to modeling
coupling between neurons to allow for time-varying coupling between neurons and, particularly,
changes in coupling related to spike-timing dependent plasticity. Although it may be difficult to
resolve the small effects of STDP in vivo, both improvements in recording techniques and statistical
methods promise to make the observation of these ongoing changes possible.
References
[1] LF Abbott and SB Nelson. Synaptic plasticity: taming the beast. Nature Neuroscience, 3:1178–1183, 2000.
[2] G Bi and M Poo. Synaptic modification by correlated activity: Hebb's postulate revisited. Annual Review of Neuroscience, 24(1):139–166, 2001.
[3] H Markram, J Lubke, M Frotscher, and B Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275(5297):213–215, 1997.
[4] S Song, KD Miller, and LF Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3(9):919–926, 2000.
[5] V Jacob, DJ Brasier, I Erchova, D Feldman, and DE Shulz. Spike timing-dependent synaptic depression in the in vivo barrel cortex of the rat. The Journal of Neuroscience, 27(6):1271, 2007.
[6] Z Chen, D Putrino, S Ghosh, R Barbieri, and E Brown. Statistical inference for assessing functional connectivity of neuronal ensembles with sparse spiking data. Neural Systems and Rehabilitation Engineering, IEEE Transactions on, (99):1–1, 2010.
[7] S Gerwinn, JH Macke, M Seeger, and M Bethge. Bayesian inference for spiking neuron models with a sparsity prior. Advances in Neural Information Processing Systems, 20, 2007.
[8] M Okatan, MA Wilson, and EN Brown. Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Computation, 17(9):1927–1961, 2005.
[9] L Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15:243–262, 2004.
[10] JW Pillow, J Shlens, L Paninski, A Sher, AM Litke, EJ Chichilnisky, and EP Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995–999, 2008.
[11] IH Stevenson, JM Rebesco, LE Miller, and KP Kording. Inferring functional connections between neurons. Current Opinion in Neurobiology, 18(6):582–588, 2008.
[12] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. Journal of Neurophysiology, 93(2):1074–1089, 2005.
[13] W Wu and NG Hatsopoulos. Real-time decoding of nonstationary neural activity in motor cortex. Neural Systems and Rehabilitation Engineering, IEEE Transactions on, 16(3):213–222, 2008.
[14] S Grun, M Diesmann, and A Aertsen. Unitary events in multiple single-neuron spiking activity: II. Nonstationary data. Neural Computation, 14(1):81–119, 2002.
[15] V. Ventura, C. Cai, and R. E. Kass. Statistical assessment of time-varying dependency between two neurons. Journal of Neurophysiology, 94(4):2940–2947, 2005.
[16] MCW Van Rossum, GQ Bi, and GG Turrigiano. Stable Hebbian learning from spike timing-dependent plasticity. Journal of Neuroscience, 20(23):8812, 2000.
[17] B Babadi and LF Abbott. Intrinsic stability of temporally shifted spike-timing dependent plasticity. PLoS Comput Biol, 6(11):e1000961, 2010.
[18] M.A. Woodin, K. Ganguly, and M. Poo. Coincident pre- and postsynaptic activity modifies GABAergic synapses by postsynaptic changes in Cl- transporter activity. Neuron, 39(5):807–820, 2003.
[19] EN Brown, DP Nguyen, LM Frank, MA Wilson, V Solo, and A Sydney. An analysis of neural receptive field dynamics by point process adaptive filtering. Proc Natl Acad Sci USA, 98:12261–12266, 2001.
[20] UT Eden, LM Frank, R Barbieri, V Solo, and EN Brown. Dynamic analysis of neural encoding by point process adaptive filtering. Neural Computation, 16(5):971–998, 2004.
[21] MB Ahrens, L Paninski, and M Sahani. Inferring input nonlinearities in neural encoding models. Network: Computation in Neural Systems, 19(1):35–67, 2008.
[22] N Hatsopoulos, J Joshi, and JG O'Leary. Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. Journal of Neurophysiology, 92(2):1165–1174, 2004.
[23] G Bi and M Poo. Distributed synaptic modification in neural networks induced by patterned stimulation. Nature, 401(6755):792–795, 1999.
[24] A Jackson, J Mavoori, and EE Fetz. Long-term motor cortex plasticity induced by an electronic neural implant. Nature, 444(7115):56–60, 2006.
[25] JM Rebesco, IH Stevenson, K Kording, SA Solla, and LE Miller. Rewiring neural interactions by microstimulation. Frontiers in Systems Neuroscience, 4:12, 2010.
[26] RC Froemke and Y Dan. Spike-timing-dependent synaptic modification induced by natural spike trains. Nature, 416(6879):433–438, 2002.
Ranking annotators for crowdsourced labeling tasks
Shipeng Yu
Siemens Healthcare, Malvern, PA, USA
[email protected]
Vikas C. Raykar
Siemens Healthcare, Malvern, PA, USA
[email protected]
Abstract
With the advent of crowdsourcing services it has become quite cheap and reasonably effective to get a dataset labeled by multiple annotators in a short amount of
time. Various methods have been proposed to estimate the consensus labels by
correcting for the bias of annotators with different kinds of expertise. Often we
have low quality annotators or spammers – annotators who assign labels randomly
(e.g., without actually looking at the instance). Spammers can make the cost of
acquiring labels very expensive and can potentially degrade the quality of the consensus labels. In this paper we formalize the notion of a spammer and define
a score which can be used to rank the annotators, with the spammers having a
score close to zero and the good annotators having a high score close to one.
1 Spammers in crowdsourced labeling tasks
Annotating an unlabeled dataset is one of the bottlenecks in using supervised learning to build good
predictive models. Getting a dataset labeled by experts can be expensive and time consuming. With
the advent of crowdsourcing services (Amazon?s Mechanical Turk being a prime example) it has
become quite easy and inexpensive to acquire labels from a large number of annotators in a short
amount of time (see [8], [10], and [11] for some computer vision and natural language processing
case studies). One drawback of most crowdsourcing services is that we do not have tight control
over the quality of the annotators. The annotators can come from a diverse pool including genuine
experts, novices, biased annotators, malicious annotators, and spammers. Hence in order to get good
quality labels requestors typically get each instance labeled by multiple annotators and these multiple
annotations are then consolidated either using a simple majority voting or more sophisticated methods that model and correct for the annotator biases [3, 9, 6, 7, 14] and/or task complexity [2, 13, 12].
In this paper we are interested in ranking annotators based on how spammer-like each annotator is.
In our context a spammer is a low quality annotator who assigns random labels (maybe because the
annotator does not understand the labeling criteria, does not look at the instances when labeling, or
maybe a bot pretending to be a human annotator). Spammers can significantly increase the cost of
acquiring annotations (since they need to be paid) and at the same time decrease the accuracy of the
final consensus labels. A mechanism to detect and eliminate spammers is a desirable feature for any
crowdsourcing market place. For example one can give monetary bonuses to good annotators and
deny payments to spammers.
The main contribution of this paper is to formalize the notion of a spammer for binary, categorical,
and ordinal labeling tasks. More specifically we define a scalar metric which can be used to rank the
annotators?with the spammers having a score close to zero and the good annotators having a score
close to one (see Figure 4). We summarize the multiple parameters corresponding to each annotator
into a single score indicative of how spammer like the annotator is. While this spammer score was
implicit for binary labels in earlier works [3, 9, 2, 6] the extension to categorical and ordinal labels is
novel and is quite different from the accuracy computed from the confusion rate matrix. An attempt
to quantify the quality of the workers based on the confusion matrix was recently made by [4] where
they transformed the observed labels into posterior soft labels based on the estimated confusion
1
matrix. While we obtain somewhat similar annotator rankings, we differ from this work in that our
score is directly defined in terms of the annotator parameters (see ? 5 for more details).
The rest of the paper is organized as follows. For ease of exposition we start with binary labels (§ 2) and later extend it to categorical (§ 3) and ordinal labels (§ 4). We first specify the annotator model used, formalize the notion of a spammer, and propose an appropriate score in terms of the annotator model parameters. We do not dwell too much on the estimation of the annotator model parameters. These parameters can either be estimated directly using a known gold standard (footnote 1) or using the iterative algorithms that estimate the annotator model parameters without actually knowing the gold standard [3, 9, 2, 6, 7]. In the experimental section (§ 6) we obtain rankings for the annotators using the proposed spammer scores on some publicly available data from different domains.
2 Spammer score for crowdsourced binary labels
Annotator model Let y_i^j ∈ {0, 1} be the label assigned to the i-th instance by the j-th annotator, and let y_i ∈ {0, 1} be the actual (unobserved) binary label. We model the accuracy of the annotator separately on the positive and the negative examples. If the true label is one, the sensitivity (true positive rate) α^j for the j-th annotator is defined as the probability that the annotator labels it as one,

α^j := Pr[y_i^j = 1 | y_i = 1].

On the other hand, if the true label is zero, the specificity (1 − false positive rate) β^j is defined as the probability that the annotator labels it as zero,

β^j := Pr[y_i^j = 0 | y_i = 0].
Extensions of this basic model have been proposed to include item level difficulty [2, 13] and also
to model the annotator performance based on the feature vector [14]. For simplicity we use the
basic model proposed in [7] in our formulation. Based on many instances labeled by multiple
annotators the maximum likelihood estimator for the annotator parameters (α^j, β^j) and also the
consensus ground truth (yi ) can be estimated iteratively [3, 7] via the Expectation Maximization
(EM) algorithm. The EM algorithm iteratively establishes a particular gold standard (initialized via
majority voting), measures the performance of the annotators given that gold standard (M-step), and
refines the gold standard based on the performance measures (E-step).
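For concreteness, one EM iteration for this binary model can be sketched as follows. This is a minimal illustration with our own variable names (log-domain arithmetic would be used in practice); it assumes every annotator labels every instance, with L the N x M matrix of observed labels and mu the current soft labels Pr[y_i = 1], initialized by majority voting.

import numpy as np

def em_step(L, mu):
    # M-step: re-estimate prevalence, sensitivities, and specificities
    # from the current soft labels.
    p = mu.mean()
    alpha = (mu[:, None] * L).sum(axis=0) / mu.sum()
    beta = ((1 - mu)[:, None] * (1 - L)).sum(axis=0) / (1 - mu).sum()
    # E-step: refine the soft labels via Bayes' rule.
    a = p * np.prod(alpha ** L * (1 - alpha) ** (1 - L), axis=1)
    b = (1 - p) * np.prod(beta ** (1 - L) * (1 - beta) ** L, axis=1)
    return a / (a + b), alpha, beta

Iterating em_step until mu converges yields the estimates (α^j, β^j) used below.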
Who is a spammer? Intuitively, a spammer assigns labels randomly – maybe because the annotator
does not understand the labeling criteria, does not look at the instances when labeling, or maybe a
bot pretending to be a human annotator. More precisely an annotator is a spammer if the probability
of observed label yij being one given the true label yi is independent of the true label, i.e.,
Pr[y_i^j = 1 | y_i] = Pr[y_i^j = 1].    (1)

This means that the annotator is assigning labels randomly by flipping a coin with bias Pr[y_i^j = 1] without actually looking at the data. Equivalently (1) can be written as

Pr[y_i^j = 1 | y_i = 1] = Pr[y_i^j = 1 | y_i = 0],    (2)

which implies α^j = 1 − β^j.
Hence in the context of the annotator model defined earlier a perfect spammer is an annotator for whom α^j + β^j − 1 = 0. This corresponds to the diagonal line on the Receiver Operating Characteristic (ROC) plot (see Figure 1(a)) (footnote 2). If α^j + β^j − 1 < 0 then the annotator lies below the diagonal line and is a malicious annotator who flips the labels. Note that a malicious annotator has discriminatory power if we can detect them and flip their labels. In fact the methods proposed in [3, 7] can automatically flip the labels for the malicious annotators. Hence we define the spammer score for an annotator as

S^j = (α^j + β^j − 1)².    (3)

An annotator is a spammer if S^j is close to zero. Good annotators have S^j > 0 while a perfect annotator has S^j = 1.
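Given estimates of (α^j, β^j), e.g. from the EM sketch above, the score (3) is a one-liner (the helper name is ours):

def spammer_score_binary(alpha, beta):
    # (3): zero for a spammer, one for a perfect annotator
    return (alpha + beta - 1.0) ** 2

For instance, spammer_score_binary(0.6, 0.4) returns 0.0 (a spammer), while a malicious annotator with (0.2, 0.3) still scores 0.25.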
1 One of the commonly used strategies to filter out spammers is to inject some items with known labels into the annotations. This is the strategy used by CrowdFlower (http://crowdflower.com/docs/gold).
2 Also note that (α^j + β^j)/2 is equal to the area shown in the plot and can be considered as a non-parametric approximation to the area under the ROC curve (AUC) based on one observed point. It is also equal to the Balanced Classification Rate (BCR). So a spammer can also be defined as having BCR or AUC equal to 0.5.
[Figure 1 panels omitted: (a) binary annotator model on the ROC plot (sensitivity vs. 1 − specificity), marking the regions of good, biased, and malicious annotators and spammers; (b) contours of equal accuracy (prevalence = 0.5); (c) contours of equal spammer score.]
Figure 1: (a) For binary labels an annotator is modeled by his/her sensitivity and specificity. A perfect spammer
lies on the diagonal line on the ROC plot. (b) Contours of equal accuracy (4) and (c) equal spammer score (3).
Accuracy This notion of a spammer is quite different from that of the accuracy of an annotator. An annotator with high accuracy is a good annotator but one with low accuracy is not necessarily a spammer. The accuracy is computed as

Accuracy^j = Pr[y_i^j = y_i] = Σ_{k=0}^{1} Pr[y_i^j = k | y_i = k] Pr[y_i = k] = α^j p + β^j (1 − p),    (4)

where p := Pr[y_i = 1] is the prevalence of the positive class. Note that accuracy depends on prevalence. Our proposed spammer score does not depend on prevalence and essentially quantifies the annotator's inherent discriminatory power. Figure 1(b) shows the contours of equal accuracy
on the ROC plot. Note that annotators below the diagonal line (malicious annotators) have low
accuracy. The malicious annotators are good annotators but they flip their labels and as such are not
spammers if we can detect them and then correct for the flipping. In fact the EM algorithms [3, 7]
can correctly flip the labels for the malicious annotators and hence they should not be treated as
spammers. Figure 1(c) also shows the contours of equal score for our proposed score and it can be
seen that the malicious annotators have a high score and only annotators along the diagonal have a
low score (spammers).
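A quick numeric check (illustrative values, prevalence p = 0.5) makes the contrast concrete:

alpha, beta, p = 0.1, 0.2, 0.5          # a malicious annotator
print(alpha * p + beta * (1 - p))        # accuracy = 0.15 (low)
print((alpha + beta - 1.0) ** 2)         # spammer score = 0.49 (high)
alpha, beta = 0.6, 0.4                   # a spammer: alpha + beta = 1
print(alpha * p + beta * (1 - p))        # accuracy = 0.5
print((alpha + beta - 1.0) ** 2)         # spammer score = 0.0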
Log-odds Another interpretation of a spammer can be seen from the log odds. Using Bayes' rule the posterior log-odds can be written as

log( Pr[y_i = 1 | y_i^j] / Pr[y_i = 0 | y_i^j] ) = log( Pr[y_i^j | y_i = 1] / Pr[y_i^j | y_i = 0] ) + log( p / (1 − p) ).

If an annotator is a spammer (i.e., (2) holds) then log( Pr[y_i = 1 | y_i^j] / Pr[y_i = 0 | y_i^j] ) = log( p / (1 − p) ). Essentially the annotator provides no information in updating the posterior log-odds and hence does not contribute to the estimation of the actual true label.
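Numerically (illustrative values): for a spammer the likelihood ratio is one, so the posterior log-odds stay at the prior log-odds.

import math
p = 0.3
alpha, beta = 0.7, 0.3                      # a spammer: alpha = 1 - beta
lr = alpha / (1 - beta)                      # Pr[y^j=1 | y=1] / Pr[y^j=1 | y=0] = 1
print(math.log(lr) + math.log(p / (1 - p)))  # equals log(p/(1-p)): no information gained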
3 Spammer score for categorical labels
Annotator model Suppose there are K ≥ 2 categories. We introduce a multinomial parameter α_c^j = (α_{c1}^j, ..., α_{cK}^j) for each annotator, where

α_{ck}^j := Pr[y_i^j = k | y_i = c]  and  Σ_{k=1}^K α_{ck}^j = 1.

The term α_{ck}^j denotes the probability that annotator j assigns class k to an instance given that the true class is c. When K = 2, α_{11}^j and α_{00}^j are sensitivity and specificity, respectively.

Who is a spammer? As earlier a spammer assigns labels randomly, i.e.,

Pr[y_i^j = k | y_i] = Pr[y_i^j = k], ∀k.

This is equivalent to Pr[y_i^j = k | y_i = c] = Pr[y_i^j = k | y_i = c'], ∀c, c', k = 1, ..., K, which means knowing the true class label being c or c' does not change the probability of the annotator's assigned label. This indicates that the annotator j is a spammer if

α_{ck}^j = α_{c'k}^j, ∀c, c', k = 1, ..., K.    (5)

Let A^j be the K × K confusion rate matrix with entries [A^j]_{ck} = α_{ck}^j; a spammer would have all the rows of A^j equal, for example,

A^j = [0.50 0.25 0.25; 0.50 0.25 0.25; 0.50 0.25 0.25],

for a three class categorical annotation problem. Essentially A^j is a rank one matrix of the form A^j = e v_j^T, for some column vector v_j ∈ R^K that satisfies v_j^T e = 1, where e is a column vector of ones.

In the binary case we had this natural notion of spammer as an annotator for whom α^j + β^j − 1 was close to zero. One natural way to summarize (5) would be in terms of the distance (Frobenius norm) of the confusion matrix to the closest rank one approximation, i.e.,

S^j := ||A^j − e v̂_j^T||²_F,    (6)

where v̂_j solves

v̂_j = argmin_{v_j} ||A^j − e v_j^T||²_F  s.t.  v_j^T e = 1.    (7)

Solving (7) yields v̂_j = (1/K) A^{jT} e, which is the mean of the rows of A^j. Then from (6) we have

S^j = || (I − (1/K) e e^T) A^j ||²_F = (1/K) Σ_{c<c'} Σ_k (α_{ck}^j − α_{c'k}^j)².

So a spammer is an annotator for whom S^j is close to zero. A perfect annotator has S^j = K − 1. We normalize this score to lie between 0 and 1:

S^j = (1/(K(K − 1))) Σ_{c<c'} Σ_k (α_{ck}^j − α_{c'k}^j)².    (8)
When K = 2 this is equivalent to the score proposed earlier for binary labels. As earlier this notion of a spammer is different from the accuracy computed from the confusion rate matrix and the prevalence. The accuracy is computed as Accuracy^j = Pr[y_i^j = y_i] = Σ_{k=1}^K Pr[y_i^j = k | y_i = k] Pr[y_i = k] = Σ_{k=1}^K α_{kk}^j Pr[y_i = k].
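Computing (8) from an estimated confusion rate matrix is straightforward; a minimal sketch (the function name is ours):

import numpy as np
from itertools import combinations

def spammer_score_categorical(A):
    # A is the K x K confusion rate matrix of one annotator, rows indexed
    # by the true class. Returns 0 for a spammer, 1 for a perfect annotator.
    K = A.shape[0]
    s = sum(((A[c] - A[cp]) ** 2).sum() for c, cp in combinations(range(K), 2))
    return s / (K * (K - 1))

As a check, spammer_score_categorical(np.eye(3)) returns 1.0, while a rank one matrix with identical rows returns 0.0.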
4 Spammer score for ordinal labels
A commonly used paradigm to annotate instances is to use ordinal scales where an annotator is
asked to rate an instance on a certain ordinal scale, say {1, . . . , K}. For example, rating a restaurant
on a scale of 1 to 5 or assessing the malignancy of a lesion on a BIRADS scale of 1 to 5 for
mammography. This differs from categorical labels where there is no order among the multiple
class labels. An ordinal variable expresses rank and there is an implicit ordering 1 < . . . < K.
Annotator model It is conceptually easier to think of the true label as binary, that is, y_i ∈ {0, 1}. For example in mammography a lesion is either malignant (1) or benign (0) (which can be confirmed by biopsy) and the BIRADS ordinal scale is a means for the radiologist to quantify the uncertainty based on the digital mammogram. The radiologist assigns a higher value of the label if he/she thinks the true label is closer to one. As earlier we characterize each annotator by the sensitivity and the specificity, but the main difference is that we now define the sensitivity and specificity for each ordinal label (or threshold) k ∈ {1, ..., K}. Let α_k^j and β_k^j be the sensitivity and specificity respectively of the j-th annotator corresponding to the threshold k, that is,

α_k^j = Pr[y_i^j ≥ k | y_i = 1]  and  β_k^j = Pr[y_i^j < k | y_i = 0].

Note that α_1^j = 1, β_1^j = 0 and α_{K+1}^j = 0, β_{K+1}^j = 1 from this definition. Hence each annotator is parameterized by a set of 2(K − 1) parameters [α_2^j, β_2^j, ..., α_K^j, β_K^j]. This corresponds to an empirical ROC curve for the annotator (Figure 2).
Who is a spammer? As earlier we define an annotator j to be a spammer if Pr[y_i^j = k | y_i = 1] = Pr[y_i^j = k | y_i = 0] ∀k = 1, ..., K. Note that from the annotation model we have (see footnote 3) Pr[y_i^j = k | y_i = 1] = α_k^j − α_{k+1}^j and Pr[y_i^j = k | y_i = 0] = β_{k+1}^j − β_k^j. This implies that annotator j is a spammer if α_k^j − α_{k+1}^j = β_{k+1}^j − β_k^j, ∀k = 1, ..., K, which leads to α_k^j + β_k^j = α_1^j + β_1^j = 1, ∀k. This means that for every k, the point (1 − β_k^j, α_k^j) lies on the diagonal line in the ROC plot shown in Figure 2. The area under the empirical ROC curve can be computed as (see Figure 2)

AUC^j = (1/2) Σ_{k=1}^K (α_{k+1}^j + α_k^j)(β_{k+1}^j − β_k^j),

and can be used to define the following spammer score as (2 AUC^j − 1)² to rank the different annotators:

S^j = [ Σ_{k=1}^K (α_{k+1}^j + α_k^j)(β_{k+1}^j − β_k^j) − 1 ]².    (9)

[Figure 2 panel omitted: the empirical ROC curve traced by the points (1 − β_k^j, α_k^j) for thresholds k = 1, ..., K.] Figure 2: Ordinal labels: An annotator is modeled by sensitivity/specificity for each threshold.
With two levels this expression defaults to the binary case. An annotator is a spammer if S^j is close to zero. Good annotators have S^j > 0 while a perfect annotator has S^j = 1.
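A sketch of (9) in code (names are ours): alpha and beta are length-(K+1) arrays holding α_k^j and β_k^j for k = 1, ..., K+1, so alpha[0] = 1, beta[0] = 0 and alpha[-1] = 0, beta[-1] = 1.

import numpy as np

def spammer_score_ordinal(alpha, beta):
    alpha, beta = np.asarray(alpha), np.asarray(beta)
    # trapezoidal area under the empirical ROC curve
    auc = 0.5 * np.sum((alpha[1:] + alpha[:-1]) * (beta[1:] - beta[:-1]))
    return (2.0 * auc - 1.0) ** 2

With two levels, alpha = [1, a, 0] and beta = [0, b, 1] give auc = (a + b)/2 and hence the binary score (a + b − 1)².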
5 Previous work
Recently Ipeirotis et al. [4] proposed a score for categorical labels based on the expected cost of
the posterior label. In this section we briefly describe their approach and compare it with our proposed score. For each instance labeled by the annotator they first compute the posterior (soft) label
Pr[yi = c|yij ] for c = 1, . . . , K, where yij is the label assigned to the ith instance by the j th
annotator and y_i is the true unknown label. The posterior label is computed via Bayes' rule as Pr[y_i = c | y_i^j] ∝ Pr[y_i^j | y_i = c] Pr[y_i = c] = Π_k (α_{ck}^j)^{δ(y_i^j, k)} p_c, where p_c = Pr[y_i = c] is the prevalence of class c. The score for a spammer is based on the intuition that the posterior label vector (Pr[y_i = 1 | y_i^j], ..., Pr[y_i = K | y_i^j]) for a good annotator will have all the probability mass concentrated on a single class. For example for a three class problem (with equal prevalence), a posterior label vector of (1, 0, 0) (certain that the class is one) comes from a good annotator while a (1/3, 1/3, 1/3) (complete uncertainty about the class label) comes from a spammer. Based on this they define the following score for each annotator:

Score^j = (1/N) Σ_{i=1}^N [ Σ_{c=1}^K Σ_{k=1}^K cost_{ck} Pr[y_i = k | y_i^j] Pr[y_i = c | y_i^j] ],    (10)
where cost_{ck} is the misclassification cost when an instance of class c is classified as k. Essentially this is capturing some sort of uncertainty of the posterior label averaged over all the instances. Perfect workers have a score Score^j = 0 while spammers will have a high score. An entropic version
of this score based on similar ideas has also been recently proposed in [5]. Our proposed spammer
score differs from this approach in the following aspects: (1) Implicit in the score defined above (10)
is the assumption that an annotator is a spammer when Pr[y_i = c | y_i^j] = Pr[y_i = c], i.e., the estimated posterior labels are simply based on the prevalence and do not depend on the observed labels. By Bayes' rule this is equivalent to Pr[y_i^j | y_i = c] = Pr[y_i^j], which is what we have used to define
our spammer score. (2) While both notions of a spammer are equivalent, the approach of [4] first
computes the posterior labels based on the observed data, the class prevalence and the annotator
3 This can be seen as follows: Pr[y_i^j = k | y_i = 1] = Pr[(y_i^j ≥ k) AND (y_i^j < k+1) | y_i = 1] = Pr[y_i^j ≥ k | y_i = 1] + Pr[y_i^j < k+1 | y_i = 1] − Pr[(y_i^j ≥ k) OR (y_i^j < k+1) | y_i = 1] = Pr[y_i^j ≥ k | y_i = 1] − Pr[y_i^j ≥ k+1 | y_i = 1] = α_k^j − α_{k+1}^j. Here we used the fact that Pr[(y_i^j ≥ k) OR (y_i^j < k+1)] = 1.
[Figure 3 panels omitted: (a) the simulation setup (sensitivity vs. 1 − specificity for the 30 simulated annotators, 500 instances); (b) the annotator ranking via the spammer score with 95% CIs; (c) median rank via accuracy vs. median rank via the spammer score; (d) median rank via Ipeirotis et al. [4] vs. median rank via the spammer score.]
Figure 3: (a) The simulation setup consisting of 10 good annotators (annotators 1 to 10), 10 spammers (11 to
20), and 10 malicious annotators (21 to 30). (b) The ranking of annotators obtained using the proposed spammer
score. The spammer score ranges from 0 to 1, the lower the score, the more spammy the annotator. The mean
spammer score and the 95% confidence intervals (CI) are shown, obtained from 100 bootstrap replications.
The annotators are ranked based on the lower limit of the 95% CI. The number at the top of the CI bar shows the
number of instances annotated by that annotator. (c) and (d) Comparison of the median rank obtained via the
spammer score with the rank obtained using (c) accuracy and (d) the method proposed by Ipeirotis et al. [4].
parameters and then computes the expected cost. Our proposed spammer score does not depend
on the prevalence of the class. Our score is also directly defined only in terms of the annotator
confusion matrix and does not need the observed labels. (3) For the score defined in (10) while
perfect annotators have a score of 0 it is not clear what should be a good baseline for a spammer.
The authors suggest to compute the baseline by assuming that a worker assigns as label the class
with maximum prevalence. Our proposed score has a natural scale with a perfect annotator having
a score of 1 and a spammer having a score of 0. (4) However one advantage of the approach in [4]
is that they can directly incorporate varied misclassification costs.
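For reference, a sketch of the cost-based score (10) of [4] (names are ours): posteriors is an N x K array whose i-th row holds Pr[y_i = c | y_i^j], and cost is the K x K misclassification cost matrix.

import numpy as np

def ipeirotis_score(posteriors, cost):
    # expected misclassification cost of the soft label, averaged over
    # the instances labeled by this annotator; 0 for a perfect worker
    return float(np.mean([q @ cost @ q for q in posteriors]))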
6 Experiments
Ranking annotators based on the confidence interval As mentioned earlier the annotator model
parameters can be estimated using the iterative EM algorithms [3, 7] and these estimated annotator
parameters can then be used to compute the spammer score. The spammer score can then be used
to rank the annotators. However one commonly observed phenomenon when working with crowdsourced data is that we have a lot of annotators who label only a very few instances. As a result the
annotator parameters cannot be reliably estimated for these annotators. In order to factor this uncertainty in the estimation of the model parameters we compute the spammer score for 100 bootstrap
replications. Based on this we compute the 95% confidence intervals (CI) for the spammer score for
each annotator. We rank the annotators based on the lower limit of the 95% CI. The CIs are wider
Table 1: Datasets. N is the number of instances. M is the number of annotators. M* is the mean/median number of annotators per instance. N* is the mean/median number of instances labeled by each annotator.

Dataset | Type | N | M | M* | N* | Brief Description
bluebird | binary | 108 | 39 | 39/39 | 108/108 | bird identification [12]: the annotator had to identify whether there was an Indigo Bunting or Blue Grosbeak in the image.
temp | binary | 462 | 76 | 10/10 | 61/16 | event annotation [10]: given a dialogue and a pair of verbs, annotators need to label whether the event described by the first verb occurs before or after the second.
wsd | categorical/3 | 177 | 34 | 10/10 | 52/20 | word sense disambiguation [10]: the labeler is given a paragraph of text containing the word "president" and asked to label one of the three appropriate senses.
sentiment | categorical/3 | 1660 | 33 | 6/6 | 291/175 | Irish economic sentiment analysis [1]: articles from three Irish online news sources were annotated by volunteer users as positive, negative, or irrelevant.
wosi | ordinal/[0 10] | 30 | 10 | 10/10 | 30/30 | word similarity [10]: numeric judgements of word similarity.
valence | ordinal/[-100 100] | 100 | 38 | 10/10 | 26/20 | affect recognition [10]: each annotator is presented with a short headline and asked to rate it on a scale [-100, 100] to denote the overall positive or negative valence.

[Figure 4 panels omitted: per-annotator spammer scores with 95% CIs for the six datasets (bluebird: 108 instances / 39 annotators; temp: 462/76; wsd: 177/34; sentiment: 1660/33; wosi: 30/10; valence: 100/38).]
Figure 4: Annotator Rankings The rankings obtained for the datasets in Table 1. The spammer score ranges
from 0 to 1, the lower the score, the more spammy the annotator. The mean spammer score and the 95%
confidence intervals (CI) are shown, obtained from 100 bootstrap replications. The annotators are ranked
based on the lower limit of the 95% CI. The number at the top of the CI bar shows the number of instances
annotated by that annotator. Note that the CIs are wider when the annotator labels only a few instances.
when the annotator labels only a few instances. For a crowdsourced labeling task the annotator has
to be good and also label a reasonable number of instances in order to be reliably identified.
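The ranking procedure itself is a small amount of code; a minimal sketch (names are ours), where score_fn re-estimates the annotator parameters (e.g. via EM) and returns one spammer score per annotator for a given bootstrap sample:

import numpy as np

def rank_annotators(data, score_fn, n_boot=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    boots = np.array([score_fn([data[i] for i in rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    lower = np.percentile(boots, 2.5, axis=0)  # lower limit of the 95% CI
    return np.argsort(-lower)                  # best annotators first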
Simulated data We first illustrate our proposed spammer score on simulated binary data (with equal
prevalence for both classes) consisting of 500 instances labeled by 30 annotators of varying sensitivity and specificity (see Figure 3(a) for the simulation setup). Of the 30 annotators we have 10 good
annotators (annotators 1 to 10 who lie above the diagonal in Figure 3(a)), 10 spammers (annotators
11 to 20 who lie around the diagonal), and 10 malicious annotators (annotators 21 to 30 who lie below the diagonal). Figure 3(b) plots the ranking of annotators obtained using the proposed spammer
score with the annotator model parameters estimated via the EM algorithm [3, 7]. The spammer
score ranges from 0 to 1, the lower the score, the more spammy the annotator. The mean spammer
score and the 95% confidence interval (CI) obtained via bootstrapping are shown. The annotators
are ranked based on the lower limit of the 95% CI. As can be seen all the spammers (annotators 11
to 20) have a low spammer score and appear at the bottom of the list. The malicious annotators have
higher scores than the spammers since we can correct for their flipping. The malicious annotators
are good annotators but they flip their labels and as such are not spammers if we detect that they are
malicious. Figure 3(c) compares the (median) rank obtained via the spammer score with the (median) rank obtained using accuracy as the score to rank the annotators. While the good annotators
are ranked high by both methods the accuracy score gives a low rank to the malicious annotators.
Accuracy does not capture the notion of a spammer. Figure 3(d) compares the ranking with the
method proposed by Ipeirotis et al. [4], which gives rankings very similar to those from our proposed score.
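The setup in Figure 3(a) can be reproduced along the following lines (a sketch with our own parameter choices; only the qualitative placement of the three groups relative to the ROC diagonal matters):

import numpy as np
rng = np.random.default_rng(0)

def simulate(n_inst=500):
    y = rng.integers(0, 2, n_inst)                 # true labels, prevalence 0.5
    good = rng.uniform(0.65, 0.95, (10, 2))        # above the diagonal
    u = rng.uniform(0.2, 0.8, 10)
    spam = np.column_stack([u, 1.0 - u])           # on the diagonal: alpha + beta = 1
    mal = 1.0 - rng.uniform(0.65, 0.95, (10, 2))   # below the diagonal
    labels = np.empty((n_inst, 30), dtype=int)
    for j, (a, b) in enumerate(np.vstack([good, spam, mal])):
        pos = rng.random(n_inst) < a               # label 1 with prob alpha when y = 1
        neg = rng.random(n_inst) >= b              # label 1 with prob 1 - beta when y = 0
        labels[:, j] = np.where(y == 1, pos, neg).astype(int)
    return y, labels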
[Figure 5 panels omitted, for the bluebird dataset (108 instances, 39 annotators): (a) annotator rank (median) via accuracy vs. via the spammer score; (b) annotator rank (median) via Ipeirotis et al. [4] vs. via the spammer score; (c) estimated sensitivity vs. 1 − specificity for each annotator.]
Figure 5: Comparison of the rank obtained via the spammer score with the rank obtained using (a) accuracy
and (b) the method proposed by Ipeirotis et al. [4] for the bluebird binary dataset. (c) The annotator model
parameters as estimated by the EM algorithm [3, 7].
[Figure 6 panels omitted, for the wsd (177 instances, 34 annotators) and sentiment (1660 instances, 33 annotators) datasets: annotator rank (median) via accuracy and via Ipeirotis et al. [4], each plotted against the rank (median) via the spammer score.]
Figure 6: Comparison of the median rank obtained via the spammer score with the rank obtained using accuracy and the method proposed by Ipeirotis et al. [4] for the two categorical datasets in Table 1.
Mechanical Turk data We report results on some publicly available linguistic and image annotation
data collected using Amazon's Mechanical Turk (AMT) and other sources. Table 1 summarizes
the datasets. Figure 4 plots the spammer scores and rankings obtained. The mean and the 95% CI
obtained via bootstrapping are also shown. The number at the top of the CI bar shows the number
of instances annotated by that annotator. The rankings are based on the lower limit of the 95% CI
which factors the number of instances labeled by the annotator into the ranking. An annotator who
labels only a few instances will have very wide CI. Some annotators who label only a few instances
may have a high mean spammer score but the CI will be wide and hence ranked lower. Ideally we
would like to have annotators with a high score and at the same time label a lot of instances so that
we can reliably identify them. The authors of [1] shared with us some qualitative observations regarding the annotators for the sentiment dataset, and these somewhat agree with our rankings. For example the authors made the following comment about Annotator 7: "Quirky annotator - had a lot of debate about what was the meaning of the annotation question. I'd say he changed his labeling strategy at least once during the process". Our proposed score gave a low rank to this annotator.
Comparison with other approaches Figures 5 and 6 compare the proposed ranking with the rank obtained using accuracy and the method proposed by Ipeirotis et al. [4] for some binary and categorical datasets in Table 1. Our proposed ranking is somewhat similar to that obtained by Ipeirotis et al. [4] but accuracy does not quite capture the notion of a spammer. For example for the bluebird dataset, for annotator 21 (see Figure 5(a)) accuracy ranks it at the bottom of the list while the proposed score puts it in the middle of the list. From the estimated model parameters it can be seen that
annotator 21 actually flips the labels (below the diagonal in Figure 5(c)) but is a good annotator.
7 Conclusions
We proposed a score to rank annotators for crowdsourced binary, categorical, and ordinal labeling
tasks. The obtained rankings and the scores can be used to allocate monetary bonuses to be paid
to different annotators and also to eliminate spammers from further labeling tasks. A mechanism
to rank annotators should be a desirable feature of any crowdsourcing service. The proposed score
should also be useful to specify the prior for Bayesian approaches to consolidate annotations.
References
[1] A. Brew, D. Greene, and P. Cunningham. Using crowdsourcing and active learning to track sentiment in online media. In Proceedings of the 6th Conference on Prestigious Applications of Intelligent Systems (PAIS'10), 2010.
[2] B. Carpenter. Multilevel Bayesian models of categorical data annotation. Technical Report available at http://lingpipe-blog.com/lingpipe-white-papers/, 2008.
[3] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, 28(1):20–28, 1979.
[4] P. G. Ipeirotis, F. Provost, and J. Wang. Quality management on Amazon Mechanical Turk. In Proceedings of the ACM SIGKDD Workshop on Human Computation (HCOMP'10), pages 64–67, 2010.
[5] V. C. Raykar and S. Yu. An entropic score to rank annotators for crowdsourced labelling tasks. In Proceedings of the Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2011.
[6] V. C. Raykar, S. Yu, L. H. Zhao, A. Jerebko, C. Florin, G. H. Valadez, L. Bogoni, and L. Moy. Supervised learning from multiple experts: Whom to trust when everyone lies a bit. In Proceedings of the 26th International Conference on Machine Learning (ICML 2009), pages 889–896, 2009.
[7] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. Journal of Machine Learning Research, 11:1297–1322, April 2010.
[8] V. S. Sheng, F. Provost, and P. G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 614–622, 2008.
[9] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective labelling of Venus images. In Advances in Neural Information Processing Systems 7, pages 1085–1092. 1995.
[10] R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng. Cheap and fast – but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '08), pages 254–263, 2008.
[11] A. Sorokin and D. Forsyth. Utility data annotation with Amazon Mechanical Turk. In Proceedings of the First IEEE Workshop on Internet Vision at CVPR 08, pages 1–8, 2008.
[12] P. Welinder, S. Branson, S. Belongie, and P. Perona. The multidimensional wisdom of crowds. In Advances in Neural Information Processing Systems 23, pages 2424–2432. 2010.
[13] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems 22, pages 2035–2043. 2009.
[14] Y. Yan, R. Rosales, G. Fung, M. Schmidt, G. Hermosillo, L. Bogoni, L. Moy, and J. Dy. Modeling annotator expertise: Learning when everybody knows a bit of something. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), pages 932–939, 2010.
Domain Theories to Overcome Data Deficiency
Martin Roscheisen
Computer Science Dept.
Munich Technical University
8 Munich 40, FRG
Reimar Hofmann
Computer Science Dept.
Edinburgh University
Edinburgh, EH89A, UK
Volker Tresp
Corporate R&D
Siemens AG
8 Munich 83, FRG
Abstract
In a Bayesian framework, we give a principled account of how domain-specific prior knowledge such as imperfect analytic domain theories can be
optimally incorporated into networks of locally-tuned units: by choosing
a specific architecture and by applying a specific training regimen. Our
method proved successful in overcoming the data deficiency problem in
a large-scale application to devise a neural control for a hot line rolling
mill. It achieves in this application significantly higher accuracy than
optimally-tuned standard algorithms such as sigmoidal backpropagation,
and outperforms the state-of-the-art solution.
1
INTRODUCTION
Learning in connectionist networks typically requires many training examples and
relies more or less explicitly on some kind of syntactic preference bias such as "minimal architecture" (Rumelhart, 1988; Le Cun et al., 1990; Weigend, 1991; inter alia)
or a smoothness constraint operator (Poggio et al., 1990), but does not make use of
explicit representations of domain-specific prior knowledge. If training data is deficient, learning a functional mapping inductively may no longer be feasible, whereas
this may still be the case when guided by domain knowledge. Controlling a rolling
mill is an example of a large-scale real-world application where training data is
very scarce and noisy, yet there exist much refined, though still very approximate,
analytic models that have been applied for the past decades and embody many
years of experience in this particular domain. Much in the spirit of Explanation-
Based Learning (see, for example, Mitchell et al., 1986; Minton et al., 1986), where
domain knowledge is applied to get valid generalizations from only a few training
examples, we consider an analytic model as an imperfect domain theory from which
the training data is "explained" (see also Scott et al., 1991; Bergadano et al., 1990;
Tecuci et al., 1990). Using a Bayesian framework, we consider in Section 2 the
optimal response of networks in the presence of noise on their input, and derive,
in Section 2.1, a familiar localized network architecture (Moody et al., 1989, 1990).
In Section 2.2, we show how domain knowledge can be readily incorporated into
this localized network by applying a specific training regimen. These results were
applied as part of a project to devise a neural control for a hot line rolling mill, and,
in Section 3, we describe experimental results which indicate that incorporating
domain theories can be indispensable for connectionist networks to be successful
in difficult engineering domains. (See also references for one of our more detailed
papers.)
2 THEORETICAL FOUNDATION
2.1 NETWORK ARCHITECTURE
We apply a Bayesian framework to systems where the training data is assumed to
be generated from the true model $f$, which itself is considered to be derived from a
domain theory $b$ that is represented as a function. Since the measurements in our application are very noisy and clustered, we took this as the paradigm case, and assume
the actual input $X \in \mathbb{R}^d$ to be a noisy version of one of a small number ($N$) of prototypical input vectors $t_1, \ldots, t_N \in \mathbb{R}^d$, where the noise is additive with covariance matrix $\Sigma$. The corresponding true output values $f(t_1), \ldots, f(t_N) \in \mathbb{R}$ are assumed to be
distributed around the values suggested by the domain theory, $b(t_1), \ldots, b(t_N)$ (variance $\sigma^2_{\mathrm{prior}}$). Thus, each point in the training data $D := \{(x_i, y_i);\ i = 1, \ldots, M\}$ is
considered to be generated as follows: $x_i$ is obtained by selecting one of the $t_k$ and
adding zero-mean noise with covariance $\Sigma$, and $y_i$ is generated by adding Gaussian
zero-mean noise with variance $\sigma^2_{\mathrm{data}}$ to $f(t_k)$.¹ We determine the system's response
$O(x)$ to an input $x$ to be optimal with respect to the expectation of the squared
error (MMSE-estimate):

$$O(x) := \operatorname*{argmin}_{o(x)} E\big((f(T_{\mathrm{true}}) - o(x))^2\big).$$

The expectation is given by $\sum_{i=1}^{N} P(T_{\mathrm{true}} = t_i \mid X = x) \cdot (f(t_i) - o(x))^2$. Bayes'
Theorem states that $P(T_{\mathrm{true}} = t_i \mid X = x) = p(X = x \mid T_{\mathrm{true}} = t_i) \cdot P(T_{\mathrm{true}} = t_i) / p(X = x)$. Under the assumption that all $t_i$ are equally likely, simplifying the
derivative of the expectation yields

$$O(x) = \sum_{i=1}^{N} P(T_{\mathrm{true}} = t_i \mid X = x) \cdot c_i$$
¹This approach is related to Nowlan (1990) and MacKay (1991), but we emphasize the
influence of different priors over the hypothesis space by giving preference to hypotheses
that are closer to the domain theory.
where $c_i$ equals $E(f(t_i) \mid D)$, i.e. the expected value of $f(t_i)$ given that the training
data is exactly $D$. Assuming the input noise to be Gaussian and $\Sigma$, unless otherwise
noted, to be diagonal, $\Sigma = (\delta_{ij}\,\sigma_i^2)_{i,j \le d}$, the probability density of $X$ under the
assumption that $T_{\mathrm{true}}$ equals $t_i$ is given by

$$p(X = x \mid T_{\mathrm{true}} = t_i) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\Big[-\frac{1}{2}(x - t_i)^t \Sigma^{-1} (x - t_i)\Big]$$

where $|\cdot|$ is the determinant. The optimal response to an input $x$ can now be
written as

$$O(x) = \frac{\sum_{i=1}^{N} \exp\big[-\frac{1}{2}(x - t_i)^t \Sigma^{-1} (x - t_i)\big] \cdot c_i}{\sum_{i=1}^{N} \exp\big[-\frac{1}{2}(x - t_i)^t \Sigma^{-1} (x - t_i)\big]} \qquad (1)$$
Equation 1 corresponds to a network architecture with $N$ Gaussian Basis Functions
(GBFs) centered at $t_k$, $k = 1, \ldots, N$, each of which has a width $\sigma_i$, $i = 1, \ldots, d$,
along the $i$-th dimension, and an output weight $c_k$. This architecture is known
to give smooth function approximations (Poggio et al., 1990; see also Platt, 1990),
and the normalized response function (partitioning-to-one) was noted earlier in
studies by Moody et al. (1988, 1989, 1990) to be beneficial to network performance.
Carving up an input space into hyperquadrics (typically hyperellipsoids or just
hyperspheres) in this way suffers in practice from the severe drawback that as soon
as the dimensionality of the input is higher, it becomes less feasible to cover the
whole space with units of only local relevance ("curse of dimensionality"). The
normalized response function has an essentially space-filling effect, and fewer units
have to be allocated while, at the same time, most of the locality properties can be
preserved such that efficient ball tree data structures (Omohundro, 1991) can still
be used. If the distances between the centers are large with respect to their widths,
the nearest-neighbor rule is recovered. With decreasing distances, the output of the
network changes more smoothly between the centers.
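To make the response function concrete, the following is a minimal NumPy sketch of the normalized (partitioning-to-one) network output of equation 1, for the diagonal-covariance case; the function and variable names are ours, chosen for illustration.

```python
import numpy as np

def gbf_output(x, centers, widths, c, eps=1e-12):
    """Normalized Gaussian Basis Function network, equation 1.

    x       : (d,)   query input
    centers : (N, d) prototype vectors t_1, ..., t_N
    widths  : (N, d) per-dimension widths sigma_i (diagonal covariance)
    c       : (N,)   output weights c_1, ..., c_N
    """
    z = (x[None, :] - centers) / widths           # standardized distances
    act = np.exp(-0.5 * np.sum(z ** 2, axis=1))   # Gaussian activations
    # Partitioning-to-one: activations are normalized to sum to one, which
    # yields the space-filling, smoothly interpolating response discussed above.
    return float(np.dot(act, c) / (np.sum(act) + eps))
```

With widely separated centers the largest activation dominates and the nearest-neighbor rule is recovered, as noted above.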
2.2 TRAINING REGIMEN
The output weights $c_i$ are given by

$$c_i = E(f(t_i) \mid D) = \int_{-\infty}^{\infty} z \cdot p(f(t_i) = z \mid D)\, dz.$$

Bayes' Theorem states that $p(f(t_i) = z \mid D) = p(D \mid f(t_i) = z) \cdot p(f(t_i) = z) / p(D)$.
Let $M(i)$ denote the set of indices $j$ of the training data points $(x_j, y_j)$ that were
generated by adding noise to $(t_i, f(t_i))$, i.e. the points that "originated" from $t_i$.
Note that it is not known a priori which indices a set $M(i)$ contains; only posterior
probabilities can be given. By applying Bayes' Theorem and by assuming the
independence between different locations $t_i$, the coefficients $c_i$ can be written as²
$$c_i = \frac{\int_{-\infty}^{\infty} z \cdot \prod_{m \in M(i)} \exp\Big[-\frac{1}{2}\frac{(z - y_m)^2}{\sigma^2_{\mathrm{data}}}\Big] \cdot \exp\Big[-\frac{1}{2}\frac{(z - b(t_i))^2}{\sigma^2_{\mathrm{prior}}}\Big]\, dz}{\int_{-\infty}^{\infty} \prod_{m \in M(i)} \exp\Big[-\frac{1}{2}\frac{(v - y_m)^2}{\sigma^2_{\mathrm{data}}}\Big] \cdot \exp\Big[-\frac{1}{2}\frac{(v - b(t_i))^2}{\sigma^2_{\mathrm{prior}}}\Big]\, dv}$$
²The normalization constants of the Gaussians in numerator and denominator cancel,
as well as the product, over all $m \notin M(i)$, of the probabilities that $(x_m, y_m)$ is in the data set.
It can be easily shown that this simplifies to

$$c_i = \frac{\sum_{m \in M(i)} y_m + k \cdot b(t_i)}{|M(i)| + k} \qquad (2)$$

where $k = \sigma^2_{\mathrm{data}} / \sigma^2_{\mathrm{prior}}$ and $|\cdot|$ denotes the cardinality operator. In accordance
with intuition, the coefficients $c_i$ turn out to be a weighted mean between the value
suggested by the domain theory $b$ and the training data values which originated
from $t_i$. The weighting factor $k/(|M(i)| + k)$ reflects the relative reliability of the
two sources of information, the empirical data and the prior knowledge.
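As a sketch, equation 2 reduces to a single line of code; the argument names below are illustrative only.

```python
def output_weight(y_members, b_ti, k):
    """Equation 2: c_i as a weighted mean of the training targets attributed
    to center t_i and the domain-theory value b(t_i).

    y_members : targets y_m with m in M(i)
    b_ti      : domain-theory prediction b(t_i)
    k         : sigma_data**2 / sigma_prior**2, the relative reliability
    """
    return (sum(y_members) + k * b_ti) / (len(y_members) + k)
```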
Define $S_i$ as $S_i = (c_i - b(t_i)) \cdot k + \sum_{m \in M(i)} (c_i - y_m)$. Clearly, if $|S_i|$ is minimized
to 0, then $c_i$ reaches exactly the optimal value as it is given by equation 2. An
adaptive solution to this is to update $c_i$ according to $\dot{c}_i = -\gamma \cdot S_i$. Since the
membership distribution for $M(i)$ is not known a priori, we approximate it using a
posterior estimate of the probability $p(m \in M(i) \mid x_m)$ that $m$ is in $M(i)$ given that
$x_m$ was generated by some center $t_k$, which is

$$p(m \in M(i) \mid x_m) = \frac{p(X = x_m \mid T_{\mathrm{true}} = t_i)}{\sum_{k=1}^{N} p(X = x_m \mid T_{\mathrm{true}} = t_k)}.$$

Note that $p(X = x_m \mid T_{\mathrm{true}} = t_i)$ is the activation $act_i$ of the $i$-th center when the
network is presented with input $x_m$. Substituting the equation in the sum of $S_i$ leads to the
following training regimen: using stochastic sample-by-sample learning, we present
in each training step with probability $1 - \lambda$ a data point $y_i$, and with probability $\lambda$
a point $b(t_k)$ that is generated from the domain theory, where $\lambda$ is given by

$$\lambda := \frac{k \cdot N}{k \cdot N + M} \qquad (3)$$

(Recall that $M$ is the total number of data points, and $N$ is the number of centers.)
$\lambda$ varies from 0 (the data is far more reliable than the prior knowledge) to 1 (the
data is unreliable in comparison with the prior knowledge). Thus, the change of $c_i$
after each presentation is proportional to the error times the normalized activation
of the $i$-th center, $act_i / \sum_{k=1}^{N} act_k$.
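A minimal sketch of this regimen, assuming the Gaussian activations of Section 2.1; the helper names, the uniform choice of a prototype for the domain-theory samples, and the random-number handling are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def training_stream(X, Y, centers, b, k, n_steps):
    """Yield (input, target) pairs, mixing measured data with points
    generated from the domain theory b, per equation 3."""
    M, N = len(X), len(centers)
    lam = k * N / (k * N + M)                      # equation 3
    for _ in range(n_steps):
        if rng.random() < lam:                     # domain-theory point
            i = rng.integers(N)
            yield centers[i], b(centers[i])
        else:                                      # measured data point
            m = rng.integers(M)
            yield X[m], Y[m]

def update_output_weights(x, y, centers, widths, c, gamma):
    """One stochastic step on c: the change of c_i is proportional to the
    error times the normalized activation act_i / sum_k act_k."""
    z = (x[None, :] - centers) / widths
    act = np.exp(-0.5 * np.sum(z ** 2, axis=1))
    resp = act / act.sum()                         # posterior membership estimate
    err = y - np.dot(resp, c)                      # interpolation error
    c += gamma * err * resp                        # in-place weight update
```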
The optimal positions for the centers $t_i$ are not known in advance, and we therefore
perform standard LMS gradient descent on $t_i$, and on the widths $\sigma_i$. The weight
updates in a learning step are given by a discretization of the following dynamic
equations ($i = 1, \ldots, N$; $j = 1, \ldots, d$):

$$\dot{t}_{ij} = 2\gamma \cdot \Delta \cdot act_i \cdot \frac{c_i - O(x)}{\sum_{k=1}^{N} act_k} \cdot \frac{1}{\sigma_{ij}^2} \cdot (x_j - t_{ij})$$

$$\dot{(\sigma^2)}_{ij} = \gamma \cdot \Delta \cdot act_i \cdot \frac{c_i - O(x)}{\sum_{k=1}^{N} act_k} \cdot \frac{1}{2}\Big(\frac{x_j - t_{ij}}{\sigma_{ij}^2}\Big)^2$$

where $\Delta$ is the interpolation error, $act_i$ is the (forward-computed) activity of
the $i$-th center, and $t_{ij}$ and $x_j$ are the $j$-th components of $t_i$ and $x$ respectively.
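Continuing the sketch, one possible discretization of these dynamic equations is given below; since the equations above are reconstructed, the constants and signs should be treated as approximate and tuned in practice.

```python
import numpy as np

def update_centers_and_widths(x, y, centers, widths, c, gamma):
    """One LMS-style step on the centers t_ij and variances sigma_ij^2
    (a reconstruction, not a verbatim transcription of the paper)."""
    var = widths ** 2                              # sigma_ij^2, shape (N, d)
    diff = x[None, :] - centers                    # (x_j - t_ij)
    act = np.exp(-0.5 * np.sum(diff ** 2 / var, axis=1))
    out = np.dot(act, c) / act.sum()               # network output O(x)
    delta = y - out                                # interpolation error
    shared = gamma * delta * (c - out) * act / act.sum()   # shape (N,)
    centers = centers + 2.0 * shared[:, None] * diff / var
    var = var + 0.5 * shared[:, None] * (diff / var) ** 2
    return centers, np.sqrt(np.maximum(var, 1e-12))
```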
3 APPLICATION TO ROLLING MILL CONTROL
3.1 THE PROBLEM
In integrated steelworks, the finishing train of the hot line rolling mill transforms
preprocessed steel from a casting successively into a homogeneously rolled steel-plate. Controlling this process is a notoriously hard problem: the underlying physical principles are only roughly known. The values of the control parameters depend
on a large number of entities, and have to be determined from measurements that
are very noisy, strongly clustered, "expensive," and scarce.³ On the other hand,
reliability and precision are at a premium. Unreasonable predictions have to be
avoided under any circumstances, even in regions where no training data is available, and, by contract, an extremely high precision is required: the rolling tolerance
has to be guaranteed to be less than typically 20 µm, which is substantial, particularly in the light of the fact that the steel construction that holds the rolls itself
expands for several millimeters under a rolling pressure of typically several thousands of tons. The considerable economic interest in improving adaptation methods
in rolling mills derives from the fact that lower rolling tolerances are indispensable
for the supplied industry, yet it has proven difficult to remain operational within
the guaranteed bounds under these constraints.
The control problem consists of determining a reduction schedule that specifies for
each pair of rolls their initial distance such that after the final roll pair the desired
thickness of the steel-plate (the actual feedback) is achieved. This reinforcement
problem can be reduced to a less complex approximation problem of predicting the
rolling force that is created at each pair of rolls, since this force can directly and
precisely be correlated to the reduction in thickness at a roll pair by conventional
means. Our task was therefore to predict the rolling force on the basis of nine
input variables like temperature and rolling speed, such that a subsequent conventional high-precision control can quickly reach the guaranteed rolling tolerance
before much of a plate is lost.
The state-of-the-art solution to this problem is a parameterized analytic model
that considers nine physical entities as input and makes use of a huge number
of tabulated coefficients that are adapted separately for each material and each
thickness class. The solution is known to give only approximate predictions about
the actual force, and although the on-line corrections by the high-precision control
are generally sufficient to reach the rolling tolerance, this process necessarily takes
more time, the worse the prediction is-resulting in a waste of more of the beginning
of a steel-plate. Furthermore, any improvement in the adaptation techniques will
also shorten the initialization process for a rolling mill, which currently takes several
months because of the poor generalization abilities of the applied method to other
thickness classes or steel qualities.
The data for our simulations was drawn from a rolling mill that was being installed
at the time of our experiments. It included measurements for around 200 different
steel qualities; only a few qualities were represented more than 100 times.
³The costs for a single sheet of metal (giving three useful data points that have to
be measured under difficult conditions) amount to a six-digit dollar sum. Only a limited
number of plates of the same steel quality is processed every week, causing the data scarcity.
3.2 EXPERIMENTAL RESULTS
According to the results in Section 2, a network of the specified localized architecture was trained with data (artificially) generated from the domain theory and
data derived from on-line measurements. The remaining design considerations for
architecture selection were based on the extent to which a network had the capacity
to represent an instantiation of the analytic model (our domain theory):
Table 1 shows the approximation error of partitioning-to-one architectures with different degrees of freedom on their centers' widths. The variances of the GBFs were
either all equal and not adapted (GBFs with constant widths), or adapted individually for all centers (GBFs with spherical adaptation), or adapted individually for
all centers and every input dimension, leading to axially oriented hyperellipsoids
(GBFs with ellipsoidal adaptation). Networks with "full hyperquadric" GBFs, for
Method                                       Normalized Error Squares [10^-2]   Maximum Error [10^-2]
GBFs with partitioning, constant widths      0.40                               2.1
GBFs with partitioning, spherical adapt.     0.18                               1.7
GBFs with partitioning, ellipsoidal adapt.   0.096                              0.41
GBFs, no partitioning                        0.85                               5.3
MLP                                          0.38                               3.4
Table 1: Approximation of an instantiation of the domain theory: localized architectures (GBFs) and a network with sigmoidal hidden units (MLP).
which the covariance matrix is no longer diagonal, were also tested, but performed
clearly worse, apparently due to too many degrees of freedom. The table shows
that the networks with "ellipsoidal" GBFs performed best. Convergence time of
this type of network was also found to be superior. The table also gives the comparative numbers for two other architectures: GBFs without normalized response
function achieved significantly lower accuracy (even if they had far more centers;
performance is given for a net with 81 centers) than those with partitioning and
only 16 centers. Using up to 200 million sample presentations, sigmoidal networks
trained with standard backpropagation (Rumelhart et al., 1986) achieved a yet
lower level, despite the use of weight-elimination (Le Cun, 1990), and an analysis
of the data's eigenvalue spectrum to optimize the learning rate (see also Le Cun,
1991). The indicated numbers are for networks with optimized numbers of hidden
units.
The value for λ was determined according to equation 3 in Section 2.2 as λ = 0.8;
the noise in our application could be easily estimated, since there are multiple
measurements for each input point available and the reliability of the domain theory
is known. Applying the described training regimen to the GBF-architecture with
ellipsoidal adaptation led to promising results:
Figure 1 shows the points in a "slice" through a specific point in the input space: the
measurements, the force as it is predicted by the analytic model and the network. It
can be seen that the net exhibits fail-safe behavior: it sticks closely to the analytic
model in regions where no data is available. If data points are available and suggest
Figure 1: Prediction of the rolling force by the state-of-the-art model, by the neural
network, and the measured data points as a function of the input 'sheet thickness,'
and 'temperature.'
Method                       Percent of Improvement     Percent of Improvement
                             on Trained Samples         at Generalization
Gaussian Units (λ = 0.8)     18                         16
Gaussian Units (λ = 0.4)     41                         14
MLP                          3.1                        3.9
Table 2: Relative improvement of the neural network solutions with respect to the
state-of-the-art model: on the training data and on the cross-validation set.
a different force, then the network modifies its output in direction of the data.
Table 2 shows to what extent the neural network method performed superior to
the currently applied state-of-the-art model (cross-validated mean). The numbers
indicate the relative improvement of the mean squared error of the network solution
with respect to an optimally-tuned analytic model. Although the data set was very
sparse and noisy, it was nevertheless still possible to give a better prediction. The
effect is also shown if a different value for λ were chosen: the higher value of λ, that
is, more prior knowledge, keeps the net from memorizing the data, and improves
generalization slightly. In case of the sigmoidal network, λ was simply optimized
to give the smallest cross-validation error. When trained without prior knowledge,
none of the architectures led to an improvement.
4 CONCLUSION
In a large-scale application to devise a neural control for a hot line rolling mill,
training data turned out to be insufficient for learning to be feasible that is only
based on syntactic preference biases. By using a Bayesian framework, an imperfect
domain theory was incorporated as an inductive bias in a principled way. The
method outperformed the state-of-the-art solution to an extent which steelworks
automation experts consider highly convincing.
Acknowledgements
This paper describes the first two authors' joint university project, which was supported
by grants from Siemens AG, Corporate R&D, and Studienstiftung des deutschen Volkes.
H. Rein and F. Schmid of the Erlangen steelworks automation group helped identify the
problem and sampled the data. W. Buttner and W. Finnoff made valuable suggestions.
References
Bergadano, F. and A. Giordana (1990). Guiding Induction with Domain Theories. In: Y.
Kodratoff et al. (eds.), Machine Learning, Vol. 3, Morgan Kaufmann.
Cun, Y. Le, J. S. Denker, and S. A. Solla (1990). Optimal Brain Damage. In: D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann.
Cun, Y. Le, I. Kanter and S. A. Solla (1991). Second Order Properties of Error Surfaces:
Learning Time and Generalization. In: R. P. Lippman et al. (eds.), Advances in
Neural Information Processing 3, Morgan Kaufmann.
Darken, Ch. and J. Moody (1990). Fast adaptive k-means clustering: some empirical
results. In: Proceedings of the IJCNN, San Diego.
Duda, R. O. and P. E. Hart (1973). Pattern Classification and Scene Analysis. NY: Wiley.
MacKay, D. (1991). Bayesian Modeling. Ph.D. thesis, Caltech.
Minton, S. N., J. G. Carbonell et al. (1989). Explanation-based Learning: A problem-solving perspective. Artificial Intelligence, Vol. 40, pp. 63-118.
Mitchell, T. M., R. M. Keller and S. T. Kedar-Cabelli (1986). Explanation-based Learning:
A unifying view. Machine Learning, Vol. 1, pp. 47-80.
Moody, J. (1990). Fast Learning in Multi-Resolution Hierarchies. In: D. S. Touretzky
(ed.), Advances in Neural Information Processing Systems 2, Kaufmann, pp. 29-39.
Moody, J. and Ch. Darken (1989). Fast Learning in Networks of Locally-tuned Processing
Units. Neural Computation, Vol. 1, pp. 281-294, MIT.
Moody, J. and Ch. Darken (1988). Learning with Localized Receptive Fields. In: D.
Touretzky et al. (eds.), Proc. of Connectionist Models Summer School, Kaufmann.
Nowlan, St. J. (1990). Maximum Likelihood Competitive Learning. In: D. S. Touretzky
(ed.,) Advances in Neural Information Processing Systems 2, Morgan Kaufmann.
Omohundro, S. M. (1991). Bump Trees for Efficient Function, Constraint, and Classification Learning. In: R. P. Lippman et al. (eds.), Advances in Neural Information
Processing 3, Morgan Kaufmann.
Platt, J. (1990). A Resource-Allocating Network for Function Interpolation. In: D. S.
Touretzky (ed.), Advances in Neural Information Processing Systems 2, Kaufmann.
Poggio, T. and F. Girosi (1990). A Theory of Networks for Approximation and Learning.
A.I. Memo No. 1140 (extended in No. 1167 and No. 1253), MIT.
Roscheisen, M., R. Hofmann, and V. Tresp (1992). Incorporating Domain-Specific Prior
Knowledge into Networks of Locally-Tuned Units. In: S. Hanson et al.(eds.), Computational Learning Theory and Natural Learning Systems, MIT Press.
Rumelhart, D. E., G. E. Hinton, and R. J. Williams (1986). Learning representations by
back-propagating errors. Nature, 323(9):533-536, October.
Rumelhart, D. E. (1988). Plenary Address, IJCNN, San Diego.
Scott, G.M., J. W. Shavlik, and W. H. Ray (1991). Refining PID Controllers using Neural
Networks. Technical Report, submitted to Neural Computation.
Tecuci, G. and Y. Kodratoff (1990). Apprenticeship Learning in Imperfect Domain Theories. In: Y. Kodratoff et al. (eds.), Machine Learning, Vol. 3, Morgan Kaufmann.
Weigend, A. (1991). Connectionist Architectures for Time-Series Prediction of Dynamical
Systems. Ph.D. thesis, Stanford.
Im2Text: Describing Images Using 1 Million
Captioned Photographs
Vicente Ordonez
Girish Kulkarni
Tamara L Berg
Stony Brook University
Stony Brook, NY 11794
{vordonezroma or tlberg}@cs.stonybrook.edu
Abstract
We develop and demonstrate automatic image description methods using a large
captioned photo collection. One contribution is our technique for the automatic
collection of this new dataset: performing a huge number of Flickr queries and
then filtering the noisy results down to 1 million images with associated visually
relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric
methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to
produce even more pleasing results. Finally we introduce a new objective performance measure for image captioning.
1 Introduction
Producing a relevant and accurate caption for an arbitrary image is an extremely challenging problem, perhaps nearly as difficult as the underlying general image understanding task. However, there
are already many images with relevant associated descriptive text available in the noisy vastness of
the web. The key is to find the right images and make use of them in the right way! In this paper,
we present a method to effectively skim the top of the image understanding problem to caption photographs by collecting and utilizing the large body of images on the internet with associated visually
descriptive text. We follow in the footsteps of past work on internet vision that has demonstrated
that big data can often make big problems (e.g. image localization [13], retrieving photos with
specific content [27], or image parsing [26]) much more bite-size and amenable to very simple nonparametric matching methods. In our case, with a large captioned photo collection we can create an
image description surprisingly well even with basic global image representations for retrieval and
caption transfer. In addition, we show that it is possible to make use of large numbers of state of the
art, but fairly noisy estimates of image content to produce more pleasing and relevant results.
People communicate through language, whether written or spoken. They often use this language to
describe the visual world around them. Studying collections of existing natural image descriptions
and how to compose descriptions for novel queries will help advance progress toward more complex human recognition goals, such as how to tell the story behind an image. These goals include
determining what content people judge to be most important in images and what factors they use
to construct natural language to describe imagery. For example, when given a picture like that on
the top row, middle column of figure 1, the user describes the girl, the dog, and their location, but
selectively chooses not to describe the surrounding foliage and hut.
This link between visual importance and descriptions leads naturally to the problem of text summarization in natural language processing (NLP). In text summarization, the goal is to select or
generate a summary for a document. Some of the most common and effective methods proposed for
summarization rely on extractive summarization [25, 22, 28, 19, 23], where the most important or
[Figure 1 example captions: "Man sits in a rusted car buried in the sand on Waitarere beach" | "Little girl and her dog in northern Thailand. They both seemed interested in what we were doing" | "Interior design of modern white and brown living room furniture against white wall with a lamp hanging." | "Emma in her hat looking super cute"]
Figure 1: SBU Captioned Photo Dataset: Photographs with user-associated captions from our
web-scale captioned photo collection. We collect a large number of photos from Flickr and filter
them to produce a data collection containing over 1 million well captioned pictures.
relevant sentence (or sentences) is selected from a document to serve as the document?s summary.
Often a variety of features related to document content [23], surface [25], events [19] or feature combinations [28] are used in the selection process to produce sentences that reflect the most significant
concepts in the document.
In our photo captioning problem, we would like to generate a caption for a query picture that summarizes the salient image content. We do this by considering a large relevant document set constructed
from related image captions and then use extractive methods to select the best caption(s) for the
image. In this way we implicitly make use of human judgments of content importance during description generation, by directly transferring human made annotations from one image to another.
This paper presents two extractive approaches for image description generation. The first uses global
image representations to select relevant captions (Sec 3). The second incorporates features derived
from noisy estimates of image content (Sec 5). Of course, the first requirement for any extractive
method is a document from which to extract. Therefore, to enable our approach we build a webscale collection of images with associated descriptions (ie captions) to serve as our document for
relevant caption extraction. A key factor to making such a collection effective is to filter it so that
descriptions are likely to refer to visual content. Some small collections of captioned images have
been created by hand in the past. The UIUC Pascal Sentence data set¹ contains 1k images each of
which is associated with 5 human generated descriptions. The ImageClef² image retrieval challenge
contains 10k images with associated human descriptions. However neither of these collections is
large enough to facilitate reasonable image based matching necessary for our goals, as demonstrated
by our experiments on captioning with varying collection size (Sec 3). In addition this is the first,
to our knowledge, attempt to mine the internet for general captioned images on a web scale!
In summary, our contributions are:
• A large novel data set containing images from the web with associated captions written by people, filtered so that the descriptions are likely to refer to visual content.
• A description generation method that utilizes global image representations to retrieve and transfer captions from our data set to a query image.
• A description generation method that utilizes both global representations and direct estimates of image content (objects, actions, stuff, attributes, and scenes) to produce relevant image descriptions.
1.1 Related Work
Studying the association between words with pictures has been explored in a variety of tasks, including: labeling faces in news photographs with associated captions [2], finding a correspondence
between keywords and image regions [1, 6], or for moving beyond objects to mid-level recognition
elements such as attribute [16, 8, 17, 12].
Image description generation in particular has been studied in a few recent papers [9, 11, 15, 30].
Kulkarni et al [15] generate descriptions from scratch based on detected object, attribute, and prepositional relationships. This results in descriptions for images that are usually closely related to image
content, but that are also often quite verbose and non-humanlike. Yao et al [30] look at the problem
¹http://vision.cs.uiuc.edu/pascal-sentences/
²http://www.imageclef.org/2011
[Figure 2 panels: query image; gist + tiny image ranking; top re-ranked images with extracted content; top associated captions, e.g. "Across the street from Yannicks apartment. At night the headlight on the handlebars above the door lights up.", "The building in which I live. My window is on the right on the 4th floor", "This is the car I was in after they had removed the roof and successfully removed me to the ambulance.", "I really like doors. I took this photo out of the car window while driving by a church in Pennsylvania."]
Figure 2: System flow: 1) Input query image, 2) Candidate matched images retrieved from our web-scale captioned collection using global image representations, 3) High level information is extracted about image content including objects, attributes, actions, people, stuff, scenes, and tfidf weighting, 4) Images are re-ranked by combining all content estimates, 5) Top 4 resulting captions.
of generating text using various hierarchical knowledge ontologies and with a human in the loop
for image parsing (except in specialized circumstances). Feng and Lapata [11] generate captions
for images using extractive and abstractive generation methods, but assume relevant documents are
provided as input, whereas our generation method requires only an image as input.
A recent approach from Farhadi et al [9] is the most relevant to ours. In this work the authors
produce image descriptions via a retrieval method, by translating both images and text descriptions
to a shared meaning space represented by a single < object, action, scene > tuple. A description
for a query image is produced by retrieving whole image descriptions via this meaning space from
a set of image descriptions (the UIUC Pascal Sentence data set). This results in descriptions that are
very human (since they were written by humans) but which may not be relevant to the specific
image content. This limited relevancy often occurs because of problems of sparsity, both in the data
collection (1000 images is too few to guarantee similar image matches) and in the representation
(only a few categories for 3 types of image content are considered).
In contrast, we attack the caption generation problem for much more general images (images found
via thousands of Flickr queries compared to 1000 images from Pascal) and a larger set of object
categories (89 vs 20). In addition to extending the object category list considered, we also include
a wider variety of image content aspects, including: non-part based stuff categories, attributes of
objects, person specific action models, and a larger number of common scene classes. We also
generate our descriptions via an extractive method with access to much larger and more general set
of captioned photographs from the web (1 million vs 1 thousand).
2 Overview & Data Collection
Our captioning system proceeds as follows (see fig 2 for illustration): 1) a query image is input to
the captioning system, 2) Candidate match images are retrieved from our web-scale collection of
captioned photographs using global image descriptors, 3) High level information related to image
content, e.g. objects, scenes, etc, is extracted, 4) Images in the match set are re-ranked based on
image content, 5) The best caption(s) is returned for the query. Captions can also be generated after
step 2 from descriptions associated with top globally matched images.
In the rest of the paper, we describe collecting a web-scale data set of captioned images from the
internet (Sec 2.1), caption generation using a global representation (Sec 3), content estimation for
various content types (Sec 4), and finally present an extension to our generation method that incorporates content estimates (Sec 5).
2.1 Building a Web-Scale Captioned Collection
One key contribution of our paper is a novel web-scale database of photographs with associated
descriptive text. To enable effective captioning of novel images, this database must be good in two
ways: 1) It must be large so that image based matches to a query are reasonably similar, 2) The
captions associated with the data base photographs must be visually relevant so that transferring
captions between pictures is useful. To achieve the first requirement we query Flickr using a huge
number of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very
large, but noisy initial set of photographs with associated text. To achieve our second requirement
[Figure 3 panels: query image; matches from 1k, 10k, 100k, and 1 million image collections]
Figure 3: Size Matters: Example matches to a query image for varying data set sizes.
we filter this set of photos so that the descriptions attached to a picture are relevant and visually
descriptive. To encourage visual descriptiveness in our collection, we select only those images
with descriptions of satisfactory length based on observed lengths in visual descriptions. We also
enforce that retained descriptions contain at least 2 words belonging to our term lists and at least one
prepositional word, e.g. "on", "under", which often indicate visible spatial relationships.
This results in a final collection of over 1 million images with associated text descriptions: the
SBU Captioned Photo Dataset. These text descriptions generally function in a similar manner to
image captions, and usually directly refer to some aspects of the visual image content (see fig 1 for
examples). Hereafter, we will refer to this web based collection of captioned images as C.
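A rough sketch of such a caption filter is shown below; the term list, preposition list, and length bounds are placeholders, since the exact values are not published here.

```python
import re

# Placeholder vocabularies (the real term lists are much larger).
TERM_LIST = {"dog", "car", "beach", "grass", "sky", "tree", "red", "running"}
PREPOSITIONS = {"on", "in", "under", "above", "at", "near", "behind", "with"}

def keep_caption(caption, min_words=4, max_words=40):
    """Return True if a Flickr caption passes the visual-relevance filter."""
    words = re.findall(r"[a-z']+", caption.lower())
    if not (min_words <= len(words) <= max_words):
        return False                          # satisfactory description length
    if sum(w in TERM_LIST for w in words) < 2:
        return False                          # at least 2 words from the term lists
    return any(w in PREPOSITIONS for w in words)   # at least one preposition
```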
Query Set: We randomly sample 500 images from our collection for evaluation of our generation
methods (exs are shown in fig 1). As is usually the case with web photos, the photos in this set
display a wide range of difficulty for visual recognition algorithms and captioning, from images that
depict scenes (e.g. beaches), to images with a relatively simple depictions (e.g. a horse in a field),
to images with much more complex depictions (e.g. a boy handing out food to a group of people).
3 Global Description Generation
Internet vision papers have demonstrated that if your data set is large enough, some very challenging
problems can be attacked with very simple matching methods [13, 27, 26]. In this spirit, we harness
the power of web photo collections in a non-parametric approach. Given a query image, Iq , our goal
is to generate a relevant description. We achieve this by computing the global similarity of a query
image to our large web-collection of captioned images, C. We find the closest matching image (or
images) and simply transfer over the description from the matching image to the query image. We
also collect the 100 most similar images to a query (our matched set of images Im ∈ M) for use
in our content based description generation method (Sec 5).
For image comparison we utilize two image descriptors. The first descriptor is the well known
gist feature, a global image descriptor related to perceptual dimensions (naturalness, roughness,
ruggedness, etc.) of scenes. The second descriptor is also a global image descriptor, computed by
resizing the image into a "tiny image", essentially a thumbnail of size 32x32. This helps us match
not only scene structure, but also the overall color of images. To find visually relevant images we
compute the similarity of the query image to images in C using a sum of gist similarity and tiny
image color similarity (equally weighted).
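A minimal sketch of this retrieval step, assuming gist and tiny-image descriptors are precomputed for the whole collection; the equal weighting follows the text, while the per-feature normalization is our assumption.

```python
import numpy as np

def global_matches(query_gist, query_tiny, db_gist, db_tiny, k=100):
    """Return indices of the k globally most similar captioned images
    (the matched set M) under summed gist + tiny-image distances."""
    d_gist = np.linalg.norm(db_gist - query_gist[None, :], axis=1)
    d_tiny = np.linalg.norm(db_tiny - query_tiny[None, :], axis=1)
    score = d_gist / d_gist.max() + d_tiny / d_tiny.max()  # lower is better
    return np.argsort(score)[:k]
```

The caption of the top-ranked image can then be transferred to the query directly.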
Results: Size Matters! Our global caption generation method is illustrated in the first 2 panes
and the first 2 resulting captions of Fig 2. This simple method often performs surprisingly well.
As reflected in past work [13, 27] image retrieval from small collections often produces spurious
matches. This can be seen in Fig 3 where increasing data set size has a significant effect on the
quality of retrieved global matches. Quantitative results also reflect this (see Table 1).
4 Image Content Estimation
Given an initial matched set of images Im ∈ M based on global descriptor similarity, we would like
to re-rank the selected captions by incorporating estimates of image content. For a query image, Iq,
and images in its matched set we extract and compare 5 kinds of image content:
• Objects (e.g. cats or hats), with shape, attributes, and actions (Sec 4.1)
• Stuff (e.g. grass or water) (Sec 4.2)
• People (e.g. man), with actions (Sec 4.3)
• Scenes (e.g. pasture or kitchen) (Sec 4.4)
• TFIDF weights (text or detector based) (Sec 4.5)
Each type of content is used to compute the similarity between matched images (and captions) and
the query image. We then rank the matched images (and captions) according to each content measure
and combine their results into an overall relevancy ranking (Sec 5).
4.1 Objects
Detection & Actions: Object detection methods have improved significantly in the last few years,
demonstrating reasonable performance for a small number of object categories [7], or as a mid-level
representation for scene recognition [20]. Running detectors on general web images however, still
produces quite noisy results, usually in the form of a large number of false positive detections. As
the number of object detectors increases this becomes even more of an obstacle to content prediction.
However, we propose that if we have some prior knowledge about the content of an image, then we
can utilize even these imperfect detectors. In our web collection, C, there are strong indicators of
content in the form of caption words: if an object is described in the text associated with an image
then it is likely to be depicted. Therefore, for the images, Im ∈ M, in our matched set we run only
those detectors for objects (or stuff) that are mentioned in the associated caption. In addition, we
also include synonyms and hyponyms for better content coverage, e.g. "dalmatian" triggers the "dog"
detector. This produces pleasingly accurate detection results. For a query image we can essentially
perform detection verification against the relatively clean matched image detections.
Specifically, we use mixtures of multi-scale deformable part detectors [10] to detect a wide variety of
objects: 89 object categories selected to cover a reasonable range of common objects. These categories include the 20 Pascal categories, 49 of the most common object categories with reasonably
effective detectors from Object Bank [20], and 20 additional common object categories.
For the 8 animate object categories in our list (e.g. cat, cow, duck) we find that detection performance
can be improved significantly by training action specific detectors, for example "dog sitting" vs
"dog running". This also aids similarity computation between a query and a matched image because
objects can be matched at an action level. Our object action detectors are trained using the standard
object detector with pose specific training data.
Representation: We represent and compare object detections using 2 kinds of features, shape and
appearance. To represent object shape we use a histogram of HoG [4] visual words, computed at
intervals of 8 pixels and quantized into 1000 visual words. These are accumulated into a spatial
pyramid histogram [18]. We also use an attribute representation to characterize object appearance.
We use the attribute list from our previous work [15] which cover 21 visual aspects describing color
(e.g. blue), texture (e.g. striped), material (e.g. wooden), general appearance (e.g. rusty), and
shape (e.g. rectangular). Training images for the attribute classifiers come from Flickr, Google, the
attribute dataset provided by Farhadi et al [8], and ImageNet [5]. An RBF kernel SVM is used to
learn a classifier for each attribute term. Then appearance characteristics are represented as a vector
of attribute responses to allow for generalization.
If we have detected an object category, c, in a query image window, O_q, and a matched image
window, O_m, then we compute the probability of an object match as:

$P(O_q, O_m) = e^{-D_o(O_q, O_m)}$

where $D_o(O_q, O_m)$ is the Euclidean distance between the object (shape or attribute) vector in the
query detection window and the matched detection window.
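In code this match probability is a one-liner, shown here as a sketch; the same exponentiated-distance form reappears for person actions (Sec 4.3) and scenes (Sec 4.4).

```python
import numpy as np

def object_match_prob(feat_q, feat_m):
    """P(O_q, O_m) = exp(-D_o) for a shared category, where the features are
    the spatial-pyramid HoG histogram or the attribute-response vector of
    each detection window."""
    return float(np.exp(-np.linalg.norm(feat_q - feat_m)))
```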
4.2 Stuff
In addition to objects, people often describe the stuff present in images, e.g. "grass". Because these
categories are more amorphous and do not display defined parts, we use a region based classification
method for detection. We train linear SVMs on the low level region features of [8] and histograms
of Geometric Context output probability maps [14] to recognize: sky, road, building, tree, water,
and grass stuff categories. While the low level features are useful for discriminating stuff by their
appearance, the scene layout maps introduce a soft preference for certain spatial locations dependent
on stuff type. Training images and bounding boxes are taken from ImageNet and evaluated at test
time on a coarsely sampled grid of overlapping square regions over whole images. Pixels in any
[Figure 4 example captions: "Amazing colours in the sky at sunset with the orange of the cloud and the blue of the sky behind." | "A female mallard duck in the lake at Luukki Espoo" | "Fresh fruit and vegetables at the market in Port Louis Mauritius." | "One monkey on the tree in the Ourika Valley Morocco" | "Clock tower against the sky." | "The river running through town I cross over this to get to the train" | "Street dog in Lijiang" | "Tree with red leaves in the field in autumn." | "The sun was coming through the trees while I was sitting in my chair by the river" | "Strange cloud formation literally flowing through the sky like a river in relation to the other clouds out there."]
Figure 4: Results: Some good captions selected by our system for query images.
region with a classification probability above a fixed threshold are treated as detections, and the max
probability for a region is used as the potential value.
If we have detected a stuff category, s, in a query image region, S_q, and a matched image region, S_m,
then we compute the probability of a stuff match as:

$P(S_q, S_m) = P(S_q = s) \cdot P(S_m = s)$

where $P(S_q = s)$ is the SVM probability of the stuff region detection in the query image and
$P(S_m = s)$ is the SVM probability of the stuff region detection in the matched image.
4.3 People & Actions
People often take pictures of people, making "person" the most commonly depicted object category
in captioned images. We utilize effective recent work on pedestrian detectors to detect and represent
people in our images. In particular, we make use of detectors from Bourdev et al [3] which learn
poselets (parts that are tightly clustered in configuration and appearance space) from a large number of 2d annotated regions on person images in a max-margin framework. To represent activities,
we use follow-on work from Maji et al [21] which classifies actions using the poselet activation
vector. This has been shown to produce accurate activity classifiers for the 9 actions in the PASCAL
VOC 2010 static image action classification challenge [7]. We use the outputs of these 9 classifiers
as our action representation vector, to allow for generalization to other similar activities.
If we have detected a person, Pq , in a query image, and a person Pm in a matched image, we compute
the probability that the people share the same action (pose) as:
$P(P_q, P_m) = e^{-D_p(P_q, P_m)}$
where Dp (Pq , Pm ) is the Euclidean distance between the person action vector in the query detection
and the person action vector in the matched detection.
4.4 Scenes
The last commonly described kind of image content relates to the general scene where an image was
captured. This often occurs when examining captioned photographs of vacation snapshots or general
outdoor settings, e.g. "my dog at the beach". To recognize scene types we train discriminative multi-kernel classifiers using the large-scale SUN scene recognition data base and code [29]. We select
23 common scene categories for our representation, including indoor (e.g. kitchen), outdoor (e.g.
beach), manmade (e.g. highway), and natural (pasture) settings. Again here we represent the scene
descriptor as a vector of scene responses for generalization.
If a scene location, Lm , is mentioned in a matched image, then we compare the scene representation
between our matched image and our query image, Lq as:
$P(L_q, L_m) = e^{-D_l(L_q, L_m)}$
where Dl (Lq , Lm ) is the Euclidean distance between the scene vector computed on the query image
and the scene vector computed on the matched image.
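Collecting the per-content similarities of Secs 4.1-4.4, a small dispatcher sketch follows; the inputs are assumed to be the precomputed descriptors or probabilities defined above.

```python
import numpy as np

def content_similarity(kind, q, m):
    """Similarity between query and matched image for one content type."""
    if kind in ("object_shape", "object_attributes", "person_action", "scene"):
        return float(np.exp(-np.linalg.norm(q - m)))  # exp(-Euclidean distance)
    if kind == "stuff":
        # q, m are the SVM probabilities of the same stuff class s in the
        # query and matched regions: P(S_q = s) * P(S_m = s).
        return q * m
    raise ValueError(f"unknown content type: {kind}")
```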
[Figure 5 example captions: "check out the face on the kid in the black hat he looks so enthused" | "The tower is the highest building in Hong Kong." | "water under the bridge" | "the water the boat was in" | "girl in a box that is a train" | "walking the dog in the primeval forest" | "small dog in the grass" | "shadows in the blue sky" | "I tried to cross the street to get in my car but you can see that I failed LOL."]
Figure 5: Funny Results: Some particularly funny or poetic results.
4.5 TFIDF Measures
For a query image, Iq, we wish to select the best caption from the matched set, Im ∈ M. For all of
the content measures described so far, we have computed the similarity of the query image content
to the content of each matched image independently. We would also like to use information from
the entire matched set of images and associated captions to predict importance. To reflect this, we
calculate TFIDF on our matched sets. This is computed as usual as a product of term frequency (tf)
and inverse document frequency (idf). We calculate this weighting both in the standard sense for
matched caption document words and for detection category frequencies (to compensate for more
prolific object detectors).
$$\mathrm{tfidf} = \frac{n_{i,j}}{\sum_k n_{k,j}} \cdot \log \frac{|D|}{|\{j : t_i \in d_j\}|}$$

We define our matched set of captions (images for detector based tfidf) to be our document, $j$, and
compute the tfidf score, where $n_{i,j}$ represents the frequency of term $i$ in the matched set of captions
(number of detections for detector based tfidf). The inverse document frequency is computed as
the log of the number of documents $|D|$ divided by the number of documents containing the term $i$
(documents with detections of type $i$ for detector based tfidf).
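A sketch of this computation over a matched set, treating each caption as one document d_j; the whitespace tokenization is our simplification.

```python
import math
from collections import Counter

def tfidf_weights(matched_captions):
    """Per-caption tf-idf weights over the matched set (the same computation
    applies to detection-category frequencies for detector-based tf-idf)."""
    docs = [c.lower().split() for c in matched_captions]
    D = len(docs)
    df = Counter(t for d in docs for t in set(d))     # |{j : t_i in d_j}|
    weights = []
    for d in docs:
        tf, n = Counter(d), len(d)                    # n = sum_k n_{k,j}
        weights.append({t: (tf[t] / n) * math.log(D / df[t]) for t in tf})
    return weights
```

A caption's tf-idf overlap score is then the sum of these weights over its words.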
5 Content Based Description Generation
For a query image, Iq, with global descriptor based matched images, Im ∈ M, we want to re-rank the matched images according to the similarity of their content to the query. We perform this
re-ranking individually for each of our content measures: object shape, object attributes, people
actions, stuff classification, and scene type (Sec 4). We then combine these individual rankings into
a final combined ranking in two ways. The first method trains a linear regression model of feature
ranks against BLEU scores. The second method divides our training set into two classes, positive
images consisting of the top 50% of the training set by BLEU score, and negative images from the
bottom 50%. A linear SVM is trained on this data with feature ranks as input. For both methods we
perform 5 fold cross validation with a split of 400 training images and 100 test images to get average
performance and standard deviation. For a novel query image, we return the captions from the top
ranked image(s) as our result.
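A sketch of the second (SVM) combination method, using scikit-learn for illustration; the regularization constant and other hyperparameters are not specified in the text and are our assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_rank_combiner(rank_features, bleu_scores):
    """Each row of rank_features holds one candidate caption's rank under
    each content measure (object shape, attributes, actions, stuff, scene,
    tf-idf). Captions in the top 50% of the training set by BLEU score are
    the positive class."""
    labels = (bleu_scores >= np.median(bleu_scores)).astype(int)
    clf = LinearSVC(C=1.0).fit(rank_features, labels)
    return clf  # clf.decision_function(ranks) scores novel candidates
```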
For an example matched caption like "The little boy sat in the grass with a ball", several types of
content will be used to score the goodness of the caption. This will be computed based on words
in the caption for which we have trained content models. For example, for the word "ball" both the
object shape and attributes will be used to compute the best similarity between a ball detection in the
query image and a ball detection in the matched image. For the word "boy" an action descriptor will
be used to compare the activity in which the boy is occupied between the query and the matched
image. For the word "grass" stuff classifications will be used to compare detections between the
query and the matched image. For each word in the caption tfidf overlap (sum of tfidf scores for
the caption) is also used as well as detector based tfidf for those words referring to objects. In the
event that multiple objects (or stuff, people or scenes) are mentioned in a matched image caption the
object (or stuff, people, or scene) based similarity measures will be a sum over the set of described
terms. For the case where a matched image caption contains a word, but there is no corresponding
detection in the query image, the similarity is not incorporated.
Results & Evaluation: Our content based captioning method often produces reasonable results (exs
are shown in Fig 4). Usually results describe the main subject of the photograph (e.g. "Street dog
in Lijiang", "One monkey on the tree in the Ourika Valley Morocco"). Sometimes they describe
the depiction extremely well (e.g. "Strange cloud formation literally flowing through the sky like a
river...", "Clock tower against the sky"). Sometimes we even produce good descriptions of attributes
(e.g. "Tree with red leaves in the field in autumn"). Other captions can be quite poetic (Fig 5): a
picture of a derelict boat captioned "The water the boat was in", a picture of monstrous tree roots
captioned "Walking the dog in the primeval forest". Other times the results are quite funny. A
picture of a flimsy wooden structure says, "The tower is the highest building in Hong Kong". Once
in a while they are spookily apropos. A picture of a boy in a black bandana is described as "Check
out the face on the kid in the black hat. He looks so enthused." (and he doesn't).
We also perform two quantitative evaluations. Several methods have been proposed to evaluate
captioning [15, 9], including direct user ratings of relevance and BLEU score [24]. User rating tends
to suffer from user variance as ratings are inherently subjective. The BLEU score on the other hand
provides a simple objective measure based on n-gram precision. As noted in past work [15], BLEU
is perhaps not an ideal measure due to large variance in human descriptions (human-human BLEU
scores hover around 0.5 [15]). Nevertheless, we report it for comparison.
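For reference, BLEU at n = 1 reduces to clipped unigram precision; a simplified single-reference sketch (omitting the brevity penalty) is given below.

```python
from collections import Counter

def bleu1(candidate, reference):
    """Modified unigram precision between a generated and a reference caption."""
    cand = candidate.lower().split()
    ref = Counter(reference.lower().split())
    clipped = sum(min(n, ref[w]) for w, n in Counter(cand).items())
    return clipped / max(len(cand), 1)
```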
Method                                          BLEU
Global Matching (1k)                            0.0774 ± 0.0059
Global Matching (10k)                           0.0909 ± 0.0070
Global Matching (100k)                          0.0917 ± 0.0101
Global Matching (1 million)                     0.1177 ± 0.0099
Global + Content Matching (linear regression)   0.1215 ± 0.0071
Global + Content Matching (linear SVM)          0.1259 ± 0.0060

Table 1: Automatic Evaluation: BLEU score measured at 1.
As can be seen in Table 1 data set size has a significant effect on BLEU score; more data provides
more similar and relevant matched images (and captions). Local content matching also improves
BLEU score somewhat over purely global matching.
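As a reference point, BLEU measured at 1 reduces to clipped unigram precision with a brevity penalty. A minimal sketch of the standard definition [24] (our own code, not the authors' evaluation script) is:

```python
import math
from collections import Counter

def bleu1(candidate, references):
    """BLEU at n=1: clipped unigram precision times a brevity penalty."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    cand_counts = Counter(cand)
    # clip each word count by its maximum count in any reference
    max_ref = Counter()
    for r in refs:
        for w, c in Counter(r).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in cand_counts.items())
    precision = clipped / max(len(cand), 1)
    # brevity penalty against the closest reference length
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * precision

print(bleu1("the little boy sat in the grass",
            ["a little boy sitting in the grass with a ball"]))
```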
In addition, we propose a new evaluation task where a user is presented with two photographs and
one caption. The user must assign the caption to the most relevant image (care is taken to remove
biases due to placement). For evaluation we use a query image and caption generated by our method.
The other image in the evaluation task is selected at random from the web-collection. This provides
an objective and useful measure to predict caption relevance. As a sanity check of our evaluation
measure we also evaluate how well a user can discriminate between the original ground truth image
that a caption was written about and a random image. We perform this evaluation on 100 images
from our web-collection using Amazon's Mechanical Turk service, and find that users are able to
select the ground truth image 96% of the time. This demonstrates that the task is reasonable and that
descriptions from our collection tend to be fairly visually specific and relevant. Considering the top
retrieved caption produced by our final method (global plus local content matching with a linear
SVM classifier), we find that users are able to select the correct image 66.7% of the time. Because
the top caption is not always visually relevant to the query image even when the method is capturing
some information, we also perform an evaluation considering the top 4 captions produced by our
method. In this case, the best caption out of the top 4 is correctly selected 92.7% of the time. This
demonstrates the strength of our content based method to produce relevant captions for images.
6 Conclusion
We have described an effective caption generation method for general web images. This method
relies on collecting and filtering a large data set of images from the internet to produce a novel web-scale captioned photo collection. We present two variations on our approach, one that uses only
global image descriptors to compose captions, and one that incorporates estimates of image content
for caption generation.
References
[1] K. Barnard, P. Duygulu, N. de Freitas, D. Forsyth, D. Blei, and M. Jordan. Matching words and pictures. Journal of Machine Learning Research, 3:1107–1135, 2003.
[2] T. Berg, A. Berg, J. Edwards, M. Maire, R. White, E. Learned-Miller, Y. Teh, and D. Forsyth. Names and faces. In CVPR, 2004.
[3] L. Bourdev, S. Maji, T. Brox, and J. Malik. Detecting people using mutually consistent poselet activations. In ECCV, 2010.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
[6] P. Duygulu, K. Barnard, N. de Freitas, and D. Forsyth. Object recognition as machine translation. In ECCV, 2002.
[7] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2010 (VOC2010) Results. http://www.pascal-network.org/challenges/VOC/voc2010/workshop/index.html.
[8] A. Farhadi, I. Endres, D. Hoiem, and D. A. Forsyth. Describing objects by their attributes. In CVPR, 2009.
[9] A. Farhadi, M. Hejrati, A. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. A. Forsyth. Every picture tells a story: generating sentences for images. In ECCV, 2010.
[10] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Discriminatively trained deformable part models, release 4. http://people.cs.uchicago.edu/~pff/latent-release4/.
[11] Y. Feng and M. Lapata. How many words is a picture worth? Automatic caption generation for news images. In Proc. of the Assoc. for Computational Linguistics, ACL '10, pages 1239–1249, 2010.
[12] V. Ferrari and A. Zisserman. Learning visual attributes. In NIPS, 2007.
[13] J. Hays and A. A. Efros. im2gps: estimating geographic information from a single image. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2008.
[14] D. Hoiem, A. A. Efros, and M. Hebert. Recovering surface layout from an image. Int. J. Comput. Vision, 75:151–172, October 2007.
[15] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. Babytalk: Understanding and generating simple image descriptions. In CVPR, 2011.
[16] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. Attribute and simile classifiers for face verification. In ICCV, 2009.
[17] C. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
[18] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching. In CVPR, June 2006.
[19] W. Li, W. Xu, M. Wu, C. Yuan, and Q. Lu. Extractive summarization using inter- and intra-event relevance. In Int Conf on Computational Linguistics, 2006.
[20] L.-J. Li, H. Su, E. P. Xing, and L. Fei-Fei. Object bank: A high-level image representation for scene classification and semantic feature sparsification. In Neural Information Processing Systems (NIPS), Vancouver, Canada, December 2010.
[21] S. Maji, L. Bourdev, and J. Malik. Action recognition from a distributed representation of pose and appearance. In CVPR, 2011.
[22] R. Mihalcea. Language independent extractive summarization. In National Conference on Artificial Intelligence, pages 1688–1689, 2005.
[23] A. Nenkova, L. Vanderwende, and K. McKeown. A compositional context sensitive multi-document summarizer: exploring the factors that influence summarization. In SIGIR, 2006.
[24] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: a method for automatic evaluation of machine translation. pages 311–318, 2002.
[25] D. R. Radev and T. Allison. MEAD - a platform for multidocument multilingual text summarization. In Int Conf on Language Resources and Evaluation, 2004.
[26] J. Tighe and S. Lazebnik. Superparsing: Scalable nonparametric image parsing with superpixels. In ECCV, 2010.
[27] A. Torralba, R. Fergus, and W. Freeman. 80 million tiny images: a large dataset for non-parametric object and scene recognition. PAMI, 30, 2008.
[28] K.-F. Wong, M. Wu, and W. Li. Extractive summarization using supervised and semi-supervised learning. In International Conference on Computational Linguistics, pages 985–992, 2008.
[29] J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010.
[30] B. Yao, X. Yang, L. Lin, M. W. Lee, and S.-C. Zhu. I2T: Image parsing to text description. Proc. IEEE, 98(8), 2010.
Analytical Results for the Error in Filtering of
Gaussian Processes
Alex Susemihl
Bernstein Center for Computational Neuroscience Berlin, Technische Universität Berlin
[email protected]
Ron Meir
Department of Electrical Engineering, Technion, Haifa
[email protected]
Manfred Opper
Bernstein Center for Computational Neuroscience Berlin, Technische Universität Berlin
[email protected]
Abstract
Bayesian filtering of stochastic stimuli has received a great deal of attention recently. It has been applied to describe the way in which biological systems dynamically represent and make decisions about the environment. There have been
no exact results for the error in the biologically plausible setting of inference on
point processes, however. We present an exact analysis of the evolution of the mean-squared error in a state estimation task using Gaussian-tuned point processes as
sensors. This allows us to study the dynamics of the error of an optimal Bayesian
decoder, providing insights into the limits obtainable in this task. This is done for
Markovian and a class of non-Markovian Gaussian processes. We find that there
is an optimal tuning width for which the error is minimized. This leads to a characterization of the optimal encoding for the setting as a function of the statistics
of the stimulus, providing a mathematically sound primer for an ecological theory
of sensory processing.
1 Introduction
Biological systems are constantly interacting with a dynamic, noisy environment, which they can
only assess through noisy sensors. Models of Bayesian decision-making have been suggested to
account for the functioning of biological systems in many areas [1, 2]. Here, we concentrate on the
problem of Bayesian filtering of stochastic processes. There have been many studies on filtering
of stimuli by biological systems [1, 2, 3], however, there are very few analytical results regarding
the error of Bayesian filtering. We provide exact expressions for the evolution of the Mean Squared
Error (MSE) of Bayesian filtering for a class of Gaussian processes. Results for expected errors of
Gaussian processes had so far been obtained only for the problem of smoothing, where predictions
are not online but have to be made using past and future observations [4, 5].
The present work seeks to give an account of the error properties in Bayesian filtering of stochastic
processes. We start by analysing the case of Markovian processes in section 2. We find a set of
filtering equations from which we can derive a differential equation for the expected mean squared
error. This provides a way to optimize the system parameters (the "encoder") in order to minimize
the error. We present an implicit equation to optimize the encoding scheme in the case of Poisson
spike observations. We also provide a full stochastic model of the evolution of the error, which can
be solved analytically in a given interval. Useful approximations for the distribution of the error are
also provided. In section 3 we show an application to optimal population coding in sensory neurons.
In section 4 we extend the same framework to higher order processes, where we can control the
smoothness by the order of the process. We close with a brief discussion. Our theoretical results
contribute to the ongoing research on ecological theories in biological signal processing (e.g., [6]),
which argue that performance of sensory systems can be enhanced by allowing sensors to adapt
to the statistics of the environment. While an increasing amount of biological evidence has been
accumulating for such theories (e.g., [7, 8, 9, 10, 11]) there has been little work providing exact
analytic demonstration of its utility so far.
2 Bayesian Filtering for the Ornstein-Uhlenbeck Process
Consider the problem of estimating a dynamically evolving state in continuous time based on partial
noisy observations. In classic approaches one assumes that the state is observed either continuously
or at discrete times, leading to the celebrated Kalman filter and its extensions. We are concerned here
with a setup of much interest in Neuroscience (as well as in Queueing theory) where the observations
take the form of a set of point processes. More concretely, let X(t) be a stochastic process,
and let M "sensory" processes be defined, each of which generates a Poisson point process with
a time-dependent rate function λ_m(X(t), t), m = 1, 2, ..., M. Such a stochastic process is often
referred to as a doubly stochastic point process. In a neuroscience context λ_m(·) represents the
tuning function of the m-th sensory cell. In order to maintain analytic tractability we focus in this
work on a Gaussian form for λ_m, given by λ_m(X(t), t) = φ exp(−(X(t) − θ_m)²/2α(t)²), where
θ_m are the tuning function centers. We will assume the tuning function centers are equally spaced
with spacing Δθ, for simplicity, although this is not essential to our arguments.
Though the rate of observations for the individual processes depends on the instantaneous value of
the process, it can be shown that under certain assumptions the total rate of observations (the rate
by which observations by all processes are generated) is independent of the process. If we assume
that the processes are independent and assume that the probability of the stimulus falling outside the
range spanned by the tuning function centers is negligible, we obtain the total rate of observations
\[ \lambda(t) \equiv \sum_m \lambda_m(X(t), t) = \phi \sum_m \exp\left( -\frac{(X(t) - \theta_m)^2}{2\alpha(t)^2} \right) \approx \frac{\sqrt{2\pi}\,\phi\,\alpha(t)}{\Delta\theta}. \]
This approximation is discussed extensively in [12] and is seen to be very precise as long as α is of
the same or of a larger order of magnitude as Δθ. Denoting the set of observations generated by the
sensory processes by D = {(t_i, m_i, θ_i)}¹, we have the probability of a given set of observations D
given a stimulus history X_{[t0,t]}

\[ P(D \mid X_{[t_0,t]}) = e^{-\sum_m \int_{t_0}^{t_f} \lambda_m(X(t),t)\,dt} \prod_i \lambda_{m_i}(X(t_i), t_i) = e^{-\int_{t_0}^{t_f} \lambda(t)\,dt} \prod_i \lambda_{m_i}(X(t_i), t_i). \]

This defines the likelihood of the observations. Note that without the independence of the total rate
from the stimulus, the likelihood would not be Gaussian due to the first term in the product. We
need to evaluate the posterior probability P(X(t) | D). We have

\[ P(X(t) \mid D) \propto P(X(t))\, P(D \mid X(t)) = P(X(t)) \int d\mu(X_{[t_0,t)})\, P(D \mid X_{[t_0,t)})\, P(X_{[t_0,t)} \mid X(t)). \]
The equations involved are Gaussian and evaluating them we obtain the usual Gaussian process
regression equations (see [13] and [14, p. 17])²

\[ \mu(t, D) = \sum_{i,j} K(t - t_i)\, C_{ij}^{-1}\, \theta_j, \qquad s(t, D) = K(0) - \sum_{i,j} K(t - t_i)\, C_{ij}^{-1}\, K(t_j - t), \quad (1) \]

where K(t − t′) is the auto-correlation function or kernel of the Gaussian process X(t). This
specifies the posterior distribution P(X(t) | D) = N(μ(t, D), s(t, D)).
¹ Here the time t_i denotes the time of the i-th observation, m_i gives the identity of the sensor making the
observation and θ_i = θ_{m_i} is the mean of the Gaussian rate function.
² C_{ij}(D) = K(t_i − t_j) + δ_{ij} α(t_i)²
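As a concrete illustration of equation (1), the following sketch (our own minimal example, not the authors' code; the OU kernel of Section 2 and all parameter values are assumed for illustration) computes the posterior mean and variance at time t from a set of observation times t_i and tuning centers θ_i:

```python
import numpy as np

def ou_kernel(dt, gamma=1.0, sigma=1.0):
    """OU auto-correlation K(tau) = sigma^2/(2 gamma) exp(-gamma |tau|)."""
    return sigma**2 / (2 * gamma) * np.exp(-gamma * np.abs(dt))

def gp_posterior(t, obs_times, obs_means, alpha, gamma=1.0, sigma=1.0):
    """Posterior mean and variance of X(t) given spike observations,
    following the Gaussian process regression equations (1)."""
    obs_times = np.asarray(obs_times, dtype=float)
    obs_means = np.asarray(obs_means, dtype=float)
    # C_ij = K(t_i - t_j) + delta_ij alpha^2
    C = ou_kernel(obs_times[:, None] - obs_times[None, :], gamma, sigma)
    C += alpha**2 * np.eye(len(obs_times))
    k = ou_kernel(t - obs_times, gamma, sigma)   # K(t - t_i)
    w = np.linalg.solve(C, k)                    # C^{-1} k
    mu = w @ obs_means                           # posterior mean mu(t, D)
    s = ou_kernel(0.0, gamma, sigma) - w @ k     # posterior variance s(t, D)
    return mu, s

mu, s = gp_posterior(t=1.0, obs_times=[0.2, 0.5, 0.9],
                     obs_means=[0.3, 0.1, -0.2], alpha=0.5)
print(mu, s)
```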
Our object of interest is the average mean squared error of the Bayesian estimator at a time t based
on past observations. This is the minimal mean-squared error of the optimal Bayesian estimator
X̂(t; D) = ⟨X(t)⟩_{X(t)|D} with respect to a mean-squared error loss function. It is given by

\[ MMSE(t) = \left\langle \left\langle (X(t) - \hat{X}(t; D))^2 \right\rangle_{X(t)\mid D} \right\rangle_D = \left\langle \left\langle (X(t) - \mu(t; D))^2 \right\rangle_{X(t)\mid D} \right\rangle_D = \left\langle s(t; D) \right\rangle_D. \]

Here we have written the averaging in the reverse of the usual order and have used X̂(t, D) = μ(t, D)
in the second step. Note that the posterior variance is independent of the value of the observations,
depending solely on the observation times. However the exact result is still intractable due to both
the complex dependence of s(t, D) on the observation times and the averaging over these. Note that
so far the results hold for all kinds of Gaussian processes.
If we make a Markov assumption about the structure of the kernel K(t − t′) we are able to
make statements about the evolution of the posterior variance between observations. This allows
us to derive the differential Chapman-Kolmogorov equation [15] for the evolution of the posterior variance and then obtain the evolution of the MMSE. For the Ornstein-Uhlenbeck process
dX(t) = −γX(t)dt + σdW(t) we have the kernel k(τ) = (σ²/2γ) e^{−γ|τ|} and the differential
equation for the evolution of the posterior variance between observations (see [16, p. 40] for example)

\[ \frac{ds(t)}{dt} = -2\gamma s(t) + \sigma^2. \quad (2) \]
When a new observation arrives, the distribution is updated through Bayes' rule. Using that
P(X(t)) = N(μ(t), s(t)) and P(θ_i | X) ∝ N(θ_i; X, α²(t)), one can see that

\[ P(X(t) \mid (t, \theta_i)) = \mathcal{N}\!\left( \frac{\alpha^2(t)\mu(t) + s(t)\theta_i}{\alpha^2(t) + s(t)},\; \frac{\alpha^2(t)\, s(t)}{\alpha^2(t) + s(t)} \right). \quad (3) \]

Here, as before, the posterior variance is independent of the specific observation θ_i, therefore we
need only concentrate on the times of observations for purposes of modeling the posterior variance.
The evolution of the posterior variance is a Markov process which is driven by a deterministic drift,
given in Eq. 2, and is also subject to discontinuous jumps at random times, which account for the
observations, described by Eq. 3. This continuous time stochastic process is defined by a transition
probability which in the limit of infinitesimal time dt → 0 is given by

\[ P(s', t + dt \mid s, t) = (1 - \lambda(t)dt)\,\delta\!\left(s' - s + dt(2\gamma s - \sigma^2)\right) + \lambda(t)dt\,\delta\!\left(s' - \frac{\alpha(t)^2 s}{\alpha(t)^2 + s}\right). \quad (4) \]
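A direct way to probe this process is to simulate it: between observations the variance follows the drift of Eq. 2, and at observation times (arriving at rate λ) it jumps according to Eq. 3. A minimal Euler-type simulation (our own sketch; the step size and parameter values are illustrative) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_posterior_variance(T=50.0, dt=1e-3, gamma=1.0, sigma=1.0,
                                lam=2.0, alpha=0.5, s0=0.5):
    """Simulate s(t): drift ds = (-2 gamma s + sigma^2) dt (Eq. 2),
    with jumps s -> alpha^2 s / (alpha^2 + s) at Poisson rate lam (Eq. 3)."""
    n = int(T / dt)
    s = np.empty(n)
    s[0] = s0
    for k in range(1, n):
        sk = s[k - 1] + (-2 * gamma * s[k - 1] + sigma**2) * dt
        if rng.random() < lam * dt:               # an observation arrives
            sk = alpha**2 * sk / (alpha**2 + sk)  # Bayes update, Eq. 3
        s[k] = sk
    return s

s = simulate_posterior_variance()
print("time-averaged posterior variance:", s[len(s)//2:].mean())
```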
In the equation above, the first term accounts for the drift given in Eq. 2 and the second term accounts
for the jumps given by Eq. 3. Using (4), and following a standard approach described in Gardiner
[15, p. 47], we obtain a partial differential equation, the so-called differential Chapman-Kolmogorov
equation, for the exact time evolution of the marginal probability density P(s, t)

\[ \frac{\partial P(s,t)}{\partial t} = \frac{\partial}{\partial s}\left[ (2\gamma s - \sigma^2) P(s,t) \right] + \lambda \left( \frac{\alpha^2}{\alpha^2 - s} \right)^2 P\!\left( \frac{\alpha^2 s}{\alpha^2 - s},\, t \right) - \lambda P(s,t). \quad (5) \]
This equation is, however, too complicated to be solved exactly in the general case. We can use it to
derive the evolution of statistical averages by noting that d⟨f(s)⟩/dt = ∫ ds f(s) ∂P(s,t)/∂t. For f(s) = s
we obtain an exact equation for the evolution of the average error. Writing ε = ⟨s⟩, we have

\[ \frac{d\varepsilon}{dt} = -2\gamma \varepsilon + \sigma^2 - \lambda(t) \left\langle \frac{s^2}{\alpha^2(t) + s} \right\rangle_{P(s,t)}. \quad (6) \]

2.1 Mean field approximation
We will now derive a good closed form approximate equation for the expected posterior variance
ε = ⟨s⟩ from (6). Note that the expectation of the nonlinear function on the right hand side is again
intractable but can be approximated using a mean-field approximation of the type ⟨f(s)⟩ ≈ f(⟨s⟩).
We obtain

\[ \frac{d\varepsilon_{mf}}{dt} = -2\gamma \varepsilon_{mf} + \sigma^2 - \lambda(t)\, \frac{\varepsilon_{mf}^2}{\alpha(t)^2 + \varepsilon_{mf}}. \quad (7) \]
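For reference, integrating the mean-field ODE (7) takes only a few lines; a sketch with illustrative parameters, reusing the constants of the simulation above:

```python
def meanfield_mmse(T=50.0, dt=1e-3, gamma=1.0, sigma=1.0,
                   lam=2.0, alpha=0.5, eps0=0.5):
    """Forward-Euler integration of Eq. 7 for the expected posterior variance."""
    eps = eps0
    for _ in range(int(T / dt)):
        eps += dt * (-2 * gamma * eps + sigma**2
                     - lam * eps**2 / (alpha**2 + eps))
    return eps

print("mean-field equilibrium error:", meanfield_mmse())
```

Its fixed point can be compared directly with the time-averaged variance from the jump-diffusion simulation.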
This approximation works remarkably well, giving an excellent account of the equilibrium regime
and of the relaxation of the error, as can be seen in Fig. 2 for the case of population coding. We
can also minimize the change in ε at each time step with respect to the sensor parameters φ, α to
find optimal values for them. The maximal observation rate φ is quite trivial, as an increase in φ
increases the effect of observations linearly. Therefore without a cost associated to observations,
there is no optimal value for φ, since increasing it will always lead to lower values of ε. Minimizing
the derivative of ε with respect to α, however, yields an implicit equation for the optimal value of
α(t),

\[ \alpha_{opt}^2(t) = \left\langle \frac{s^3}{(\alpha_{opt}(t)^2 + s)^2} \right\rangle_{P(s,t)} \bigg/ \left\langle \frac{s^2}{(\alpha_{opt}(t)^2 + s)^2} \right\rangle_{P(s,t)}. \quad (8) \]

Using again a mean-field approach, we obtain the simple result for the time-dependent tuning width
α²_opt(t) = ε(t): the square of the optimal tuning width is the average error of the current estimate
of the process. This is interesting as it accounts for sharpening of the Gaussian rates when the error
is small and broadening when the error is large.
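The mean-field result α²_opt(t) = ε(t) suggests an adaptive encoder; a one-line modification of the integration above (our own illustration, not from the paper) sets the tuning width from the current error at every step:

```python
def meanfield_mmse_adaptive(T=50.0, dt=1e-3, gamma=1.0, sigma=1.0,
                            lam=2.0, eps0=0.5):
    """Eq. 7 with the adaptive tuning width alpha^2(t) = eps(t) from Eq. 8."""
    eps = eps0
    for _ in range(int(T / dt)):
        alpha_sq = eps  # optimal width: alpha_opt^2(t) = eps(t)
        eps += dt * (-2 * gamma * eps + sigma**2
                     - lam * eps**2 / (alpha_sq + eps))
    return eps

print("adaptive-width equilibrium error:", meanfield_mmse_adaptive())
```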
2.2 Exact results for the stationary distribution
We will now assume that both λ and α are time independent so that the stochastic process converges
to a stationary state described by ∂P(s,t)/∂t = 0. To obtain information about this stationary solution it
is useful to introduce the new variable z = σ²/(γs). The linear ODE (2) transforms into a nonlinear
one, ż = γz(2 − z). This slight complication comes with a great simplification for the jump
conditions. In the new variable this is simply z′ = z + ν, where ν = σ²/(γα²) does not depend on z.
Hence the differential Chapman-Kolmogorov equation (specialised to the stationary state) is simply

\[ -\gamma \frac{d}{dz}\left[ z(2 - z) P(z) \right] + \lambda P(z - \nu) - \lambda P(z) = 0. \quad (9) \]

Viewing z as a temporal variable, we can treat Eq. 9 as a delay differential equation which depends
on P at previous values of z. If we knew P(z) in an interval z₀ − ν ≤ z < z₀, Eq. 9 would however
become a simple ordinary linear differential equation with a known inhomogeneity P(z − ν) in the
interval z₀ ≤ z ≤ z₀ + ν which could be solved explicitly by numerical quadrature. Repeating this
procedure would allow us to obtain P(z) iteratively for all z > 0. A simple argument shows that
P(z) = 0 for z < 2. Since jumps can only increase z and since also ż > 0 for z < 2, we find that
in the stationary state, the interval 0 ≤ z < 2 will become depopulated. Hence, for 2 ≤ z ≤ 2 + ν
we have

\[ -\gamma \frac{d}{dz}\left[ z(2 - z) P(z) \right] = \lambda P(z), \]

which is solved by P(z) ∝ z^{−2}(1 − 2/z)^{−1+λ/2γ}. Transforming back to the original error variable
s yields

\[ P_{eq}(s) \propto \left( \sigma^2 - 2\gamma s \right)^{\frac{\lambda}{2\gamma} - 1}, \quad (10) \]

valid for s ∈ [σ²α²/(2γα² + σ²), σ²/2γ]. This is a very interesting result, as it shows a diverging behaviour
in the equilibrium for values of λ < 2γ. This singularity can also be verified in the simulations.
This solution gives us a good intuition about the coding properties of the system. When the average
time between observations τ_obs = 1/λ is larger than the relaxation time of the process' variance
τ_var = 1/2γ, the most probable value for the error will be the equilibrium variance of the observed
process σ²/2γ. Note however that the expected error is always smaller than σ²/2γ. When λ = 2γ
we observe a transition and the most likely error becomes smaller. It was not possible to give
closed form analytical expressions for P(z) in the following intervals because the integrals are not
analytically tractable. We can, however, solve (9) numerically, obtaining great agreement with the
simulated histograms. For very small values of ν, the numerical integration becomes less reliable,
as the valid intervals become increasingly small, requiring a very small integration step. This can be
seen in Fig. 1.
Figure 1: Comparison of the different regimes for the equilibrium distribution. Top left we can see λ = ν = 1.
Note that neither solution covers all of the range of the distribution, although the exact solution captures the
behaviour very well in the low z region. Top right we can see the low λ regime. Note that the exact solution
accounts for the distribution on most of the range of the distribution. In the bottom we see the cases where the
Gaussian approximation excels. Both large λ and ν result in an approximately Gaussian distribution, as we
have derived above. The blue line (exact solution) is hardly discernible from the red line (histogram) in the
small λ case, as is the black line (Gaussian approximation) in the large λ or ν case.

We can get asymptotic expressions for P(z) when parameters are such that the relative fluctuations
of z are small. This is expected to hold for small jumps ν (when the system is trivially almost
deterministic) and/or for large jump rates λ, when the density of jumps is so large that relative
fluctuations are small. Using again a simple mean field argument as before shows that in such
situations we find that in equilibrium z should be close to z* = 1 + √(1 + λν/γ). For both small ν
and/or large λ, for z close to z* we have ν ≪ z* and we can expand P(z − ν) in a Taylor series to
second order in ν. Linearising also the drift γz(2 − z) around z* yields a Fokker-Planck equation
which is equivalent to a simple diffusion process (of the Ornstein-Uhlenbeck type) which is solved
by the Gaussian density

\[ P(z) = \mathcal{N}\!\left( 1 + \sqrt{1 + \lambda\nu/\gamma},\; \frac{\lambda \nu^2}{4\gamma \sqrt{1 + \lambda\nu/\gamma}} \right). \]

In Fig. 1 we present the different approximations compared to the simulated histograms of the posterior variance.
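The two limiting descriptions are easy to compare against the simulated variance histogram; a sketch (our own, reusing the simulator from above with illustrative parameters):

```python
import numpy as np

gamma, sigma, lam, alpha = 1.0, 1.0, 2.0, 0.5
nu = sigma**2 / (gamma * alpha**2)

s = simulate_posterior_variance(T=500.0, gamma=gamma, sigma=sigma,
                                lam=lam, alpha=alpha)[100_000:]  # drop burn-in

# closed-form stationary tail, Eq. 10, on its interval of validity
s_grid = np.linspace(sigma**2 * alpha**2 / (2*gamma*alpha**2 + sigma**2),
                     sigma**2 / (2*gamma), 200, endpoint=False)
p_tail = (sigma**2 - 2*gamma*s_grid)**(lam/(2*gamma) - 1.0)
# (compare p_tail, after normalization, with a histogram of s)

# Gaussian approximation in z = sigma^2 / (gamma s)
z_star = 1.0 + np.sqrt(1.0 + lam*nu/gamma)
print("simulated mean error:", s.mean(),
      "vs z* prediction:", sigma**2 / (gamma*z_star))
```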
We present results for the specific choice γ = σ = 1. Note however, that through a scaling of
parameters α₀ = α√γ/σ and λ₀ = λ/γ we can obtain the MMSE for any value of the four parameters
with the values for γ = σ = 1. In this way, rescaling the parameters, we can obtain the MMSE for
any values of γ, σ, λ and α.
3 Optimal Population Coding
As an application we look into the problem of neural population coding of dynamic stimuli (see
[13]). We model the spiking of neurons as doubly stochastic Poisson processes driven by the stimulus X(t), that is, the probability of a given neuron firing a spike in a given interval [t, t + dt] is given
by

\[ P_t(\text{spike}_m \mid X(t)) = \phi\, e^{-\frac{(X(t)-\theta_m)^2}{2\alpha(t)^2}}\, dt, \quad \text{and} \quad P_t(\text{spike} \mid X(t)) \approx \frac{\sqrt{2\pi}\,\phi\,\alpha(t)}{\Delta\theta}\, dt = \lambda(t)\, dt. \]

Under these assumptions, the inference from a spike train is equivalent to that on observations of
data, and the MMSE follows the differential Eq. 6. Again, the fact that the posterior variance
depends solely on the spike times allows us to substitute the spiking processes for each neuron with
one spiking process for the whole population, simplifying greatly our calculations. We compare the
framework derived with the dynamic population coding presented in [13] in Fig. 2.
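To make the decoding loop concrete, a minimal online filter (our own sketch, not the code of [13]; the exponential decay of the posterior mean between spikes follows from the OU prior, and all parameter values are illustrative) alternates the drift of Eq. 2 with the spike update of Eq. 3:

```python
def decode_spike_train(spikes, T=10.0, dt=1e-3, gamma=1.0, sigma=1.0, alpha=0.5):
    """Online posterior (mu, s) for an OU stimulus from spikes.

    spikes: list of (time, theta_i) pairs, theta_i being the preferred
    stimulus of the neuron that fired. Between spikes: d mu/dt = -gamma mu
    (OU prior relaxation) and ds/dt = -2 gamma s + sigma^2 (Eq. 2);
    at spike times, the Bayes update of Eq. 3.
    """
    mu, s = 0.0, sigma**2 / (2 * gamma)   # start at the prior
    spikes = sorted(spikes)
    j, out = 0, []
    for k in range(int(T / dt)):
        t = k * dt
        mu += dt * (-gamma * mu)
        s += dt * (-2 * gamma * s + sigma**2)
        while j < len(spikes) and spikes[j][0] <= t:
            theta = spikes[j][1]
            mu = (alpha**2 * mu + s * theta) / (alpha**2 + s)
            s = alpha**2 * s / (alpha**2 + s)
            j += 1
        out.append((t, mu, s))
    return out

trace = decode_spike_train([(1.0, 0.8), (1.3, 0.6), (4.0, -0.4)])
print(trace[-1])
```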
We have calculated the MMSE for a range of values for λ and α to obtain the dependence of the
MMSE on these parameters. In Fig. 3 we show the mean-field treatment of Eq. 6 as well as
simulations of the dynamics given by Eq. 4. The mean-field approximation works remarkably well,
yielding a relative error smaller than 2% throughout the range of parameters. The approximate
and simulated error maps are virtually indistinguishable. As can be seen in Fig. 2 the mean-field
approximation also works very well to reflect the dynamics of the error.
Figure 2: Neural coding of a second-order Markov process as described in the text. The top figure shows the
process overlaid with the posterior mean and confidence intervals. The bottom plot shows the posterior variance
of one sample run in black, the average over a thousand runs in blue and the mean-field dynamics in red. Code
modified from [13].
Figure 3: MMSE for the Ornstein-Uhlenbeck process. On the left we have the average MMSE obtained by the
simulation and on the right the value of the MMSE as a function of α for a few values of λ in the mean-field
approximation. The dots are the minima for the mean-field and the dotted curves are mean-field values for the
same λ. The mean-field leads to a very good approximation, and the optimal α for the approximation is a good
estimator for the optimal α in the simulation.
4 Filtering Smoother Processes
To study the filtering of smoother processes we will look at higher-order Markov processes. We do
so by considering a multidimensional stochastic process which is Markovian if we consider all of
the components, but restrict ourselves to one component, which will then exhibit a non-Markovian
structure. This is done by an extension to the Ornstein-Uhlenbeck process frequently used in the Gaussian process literature, whose correlation structure is given by the Matern kernel (see below). We
have to work with the covariance matrix of the system, since its elements' dynamics are coupled.
Thus, Eq. 6 will be replaced by a matrix equation, to which we then apply the same treatment.
We consider a p-th order stochastic process such as a_{p+1}X^{(p)}(t) + a_p X^{(p−1)}(t) + ··· + a_1 X(t) =
σZ(t), where Z(t) is white Gaussian noise with covariance δ(t − t′) and X^{(n)}(t) denotes the n-th
derivative of X(t). Writing the proper Ito stochastic differential equations we obtain a set of p − 1
first order differential equations and a single first order stochastic differential equation,

\[ \dot{X}_1 = X_2, \quad \dot{X}_2 = X_3, \quad \ldots, \quad \dot{X}_{p-1} = X_p, \qquad a_{p+1}\, dX_p = -\sum_{i=1}^{p} a_i X_i\, dt + \sigma\, dW_t, \]

where W_t is the Wiener process. Choosing a_k = \binom{p}{k-1} \eta^{p+1-k} yields processes X_1(t) with
an autocorrelation function given by the Matern kernel

\[ k(\tau; \nu, \eta) = \frac{\sigma^2\, 2^{-\nu}}{\sqrt{\pi}\, \Gamma(\nu + 1/2)\, \eta^{2\nu}}\, (\eta\tau)^{\nu} K_{\nu}(\eta\tau), \]

where ν + 1/2 = p, K_ν(x) is the modified Bessel function of the second kind and η is the parameter
determining the characteristic time of the kernel. Note that the one-dimensional Ornstein-Uhlenbeck
process is a special case of this with p = 1, ν = 1/2. We can control the smoothness of the process
X_1(t) with the parameter ν; increasing it yields successively smoother processes (see supplementary
information).
We can express this as a multidimensional stochastic process by choosing Γ_{i,j} = −δ_{i,j−1} + δ_{i,p} a_j
and D^{1/2}_{i,j} = δ_{i,p} δ_{j,p} σ, where δ_{i,j} is the Kronecker delta. We then have the Ito stochastic differential
equation

\[ d\vec{X}(t) = -\Gamma \vec{X}(t)\, dt + D^{1/2}\, d\vec{W} \quad (11) \]

for \vec{X}(t) = (X_1(t), X_2(t), \ldots, X_p(t))^T. The covariance matrix then evolves according to (see [16,
p. 40])

\[ \frac{d\Sigma}{dt} = -\Gamma\Sigma - \Sigma\Gamma^T + D. \quad (12) \]

This can be solved using the solution of the homogeneous equation Σ(t) = exp[−tΓ] Σ(0) exp[−tΓ^T]
and the solution to the inhomogeneous equation given by the equilibrium solution.
We assume that only the component X_1 is observed, that is, the rate of observations only depends on
that component. We have then P(X_1, X_{2:p} | obs) ∝ P(obs | X_1) P(X_1, X_{2:p}). Note that the precision
matrix (the inverse of the covariance matrix) will be updated simply by adding the likelihood term
1/α(t)² to the first diagonal element. Using the block matrix inversion theorem we obtain the new
covariance matrix

\[ \Sigma'_{i,j} = \Sigma_{i,j} - \frac{\Sigma_{1,i}\, \Sigma_{1,j}}{\alpha^2 + \Sigma_{1,1}}. \quad (13) \]

Putting equations 12 and 13 together we obtain the differential Chapman-Kolmogorov equation for
the evolution of the probability of the covariance matrix. With this we obtain the differential equation
for the average posterior covariance matrix

\[ \frac{d\,\langle \Sigma_{i,j} \rangle}{dt} = \langle \Sigma_{i+1,j} \rangle + \langle \Sigma_{i,j+1} \rangle - \left( \delta_{i,p} \sum_l a_l \langle \Sigma_{l,j} \rangle + \delta_{j,p} \sum_l a_l \langle \Sigma_{i,l} \rangle \right) - \lambda(t) \left\langle \frac{\Sigma_{1,i}\, \Sigma_{1,j}}{\alpha(t)^2 + \Sigma_{1,1}} \right\rangle + \sigma^2 \delta_{i,p} \delta_{j,p}, \quad (14) \]
where we abuse the notation by using that Σ_{i,j} = 0 if i > p or j > p. These can be solved in
the mean-field approximation to obtain an approximation for the covariance matrix. We also note
that one can derive a recursion scheme to express all of the elements as functions of the first row
of covariances Σ_{1,1:p}. With these expressions we can then use the equilibrium conditions for d⟨Σ_{i,i}⟩/dt
to solve for the equilibrium value of ⟨Σ_{i,j}⟩. We provide results for the case p = 2, ν = 3/2. The
equilibrium MMSE is shown in Fig. 4 on the left and in Fig. 4 on the right we show the dependence
on α of the MMSE. The dependence of the error on the parameters resembles strongly that of the
Ornstein-Uhlenbeck process, showing a finite optimal value of α which minimizes the error given
λ. This becomes less pronounced as we go to very low firing rates. Note that for the second-order
process the MMSE relative to the variance of the observed process (MMSE/K(0)) drops to lower
values than in the Ornstein-Uhlenbeck process, leading to a better state estimation. We expect that
the error will become increasingly smaller for higher-order processes.

Figure 4: MMSE for a second-order stochastic process. On the left is the color map of the first diagonal
element of the covariance matrix for the ν = 3/2 case, corresponding to the variance of the observed stimulus
variable and on the right, the same element as a function of α for a few values of λ. The overall dependence of
the error on λ and α is strikingly similar to the OU process, with lower values of the MMSE, however. This is
due to the smoothness of the process, making it more predictable. In red we show the MMSE for the simulated
equilibrium variance for comparison. Though the mean-field approximation is not as good as in the OU case,
the relative error of it still falls below 18% throughout the range of parameters studied.
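A sketch of the construction (our own illustration, not the authors' code): it builds the drift matrix Γ from the binomial coefficients a_k and integrates the covariance equation (12) with the mean-field jump update (13); parameter values are illustrative.

```python
import numpy as np
from math import comb

def drift_matrix(p, eta):
    """Gamma_{i,j} = -delta_{i,j-1} + delta_{i,p} a_j,
    with a_k = C(p, k-1) eta^(p+1-k) (so a_{p+1} = 1)."""
    a = np.array([comb(p, k - 1) * eta**(p + 1 - k) for k in range(1, p + 1)])
    G = np.zeros((p, p))
    for i in range(p - 1):
        G[i, i + 1] = -1.0           # the -delta_{i,j-1} part
    G[p - 1, :] = a                   # last row carries the a_j coefficients
    return G

def meanfield_cov(p=2, eta=1.0, sigma=1.0, lam=2.0, alpha=0.5,
                  T=50.0, dt=1e-3):
    """Mean-field integration of Eqs. 12-13 for the posterior covariance."""
    G = drift_matrix(p, eta)
    D = np.zeros((p, p)); D[-1, -1] = sigma**2
    S = np.eye(p) * 0.1               # some initial posterior covariance
    for _ in range(int(T / dt)):
        S = S + dt * (-G @ S - S @ G.T + D)          # Eq. 12
        jump = np.outer(S[0], S[0]) / (alpha**2 + S[0, 0])
        S = S - lam * dt * jump                      # Eq. 13 at rate lam
    return S

print("equilibrium MMSE (first diagonal element):", meanfield_cov()[0, 0])
```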
5 Discussion
We have shown that the dynamics of Bayesian state estimation error for Markovian processes can be
modelled by a simple dynamic system. This provides insight into generalization properties of Gaussian process inference in an online, causal setting, where the usual generalization error calculations
[4, 5] for Gaussian processes do not apply. Furthermore, we have demonstrated that a simple mean-field approximation successfully captures the dynamics of the average error of the described inference framework.
This was shown in detail for the case of Ornstein-Uhlenbeck processes, and for a class of higher-order Markov processes.
One key feature we were able to verify is the existence of an optimal tuning width for Gaussian-tuned
Poisson processes which minimizes the MMSE, as has been verified elsewhere for static stimuli
([17, 12, 18]). This result is robust to the inclusion of coloured noise, as we have shown by modelling
a second order process.
Future research could concentrate in generalizing the presented framework towards more realistic
spike generation models, such as integrate-and-fire neurons. The generalization to broader classes
of stimuli would be of great interest as well. These results provide a promising first step towards a
mathematical theory of ecologically grounded sensory processing.
6 Acknowledgements
The work of Alex Susemihl was supported by the DFG Research Training Group GRK1589/1. The
work of Ron Meir was partially supported by grant No. 665/08 from the Israel Science Foundation.
References
[1] Tetsuya J. Kobayashi. Implementation of dynamic bayesian decision making by intracellular kinetics. Phys. Rev. Lett., 104(22):228104, Jun 2010.
[2] Jean-Pascal Pfister, Peter Dayan, and Mate Lengyel. Know thy neighbour: A normative theory of synaptic depression. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1464–1472. 2009.
[3] Omer Bobrowski, Ron Meir, Shy Shoham, and Yonina C. Eldar. A neural network implementing optimal state estimation based on dynamic spike train decoding. In Neural Information Processing Systems, 2007.
[4] Dorthe Malzahn and Manfred Opper. A statistical physics approach for the analysis of machine learning algorithms on real data. Journal of Statistical Mechanics: Theory and Experiment, 2005(11):P11001, 2005.
[5] P. Sollich and A. Halees. Learning curves for gaussian process regression: Approximations and bounds. Neural Computation, 14(6):1393–1428, 2002.
[6] J. Atick and A. N. Redlich. Could information theory provide an ecological theory of sensory processing? Network: Computation in Neural Systems, 5:213–251, 1992.
[7] M. W. Pettet and C. D. Gilbert. Dynamic changes in receptive-field size in cat primary visual cortex. Proceedings of the National Academy of Sciences, 89(17):8366–8370, 1992.
[8] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26(3):695–702, 2000.
[9] V. Dragoi, J. Sharma, and M. Sur. Adaptation-induced plasticity of orientation tuning in adult visual cortex. Neuron, 28(1):287–298, 2000.
[10] I. Dean, B. L. Robinson, N. S. Harper, and D. McAlpine. Rapid neural adaptation to sound level statistics. Journal of Neuroscience, 28(25):6430–6438, 2008.
[11] T. Hosoya, S. A. Baccus, and M. Meister. Dynamic predictive coding by the retina. Nature, 436(7047):71–77, 2005.
[12] Steve Yaeli and Ron Meir. Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons. Frontiers in Computational Neuroscience, 5(0):12, 2010.
[13] Quentin J. M. Huys, Richard S. Zemel, Rama Natarajan, and Peter Dayan. Fast population coding. Neural Computation, 19(2):404–441, 2007.
[14] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 55 Hayward Street, Cambridge, MA 02142, 2006.
[15] C. W. Gardiner. Stochastic Methods: A Handbook for the Natural and Social Sciences, volume 13 of Springer Series in Synergetics. Springer, Berlin Heidelberg, fourth edition, 2009.
[16] Hannes Risken. The Fokker-Planck Equation: Methods of Solutions and Applications, volume 18 of Springer Series in Synergetics. Springer, Berlin Heidelberg, second ed. 1989, third printing edition, 1996.
[17] M. Bethge, D. Rotermund, and K. Pawelzik. Optimal short-term population coding: When fisher information fails. Neural Computation, 14(10):2317–2351, 2002.
[18] Philipp Berens, Alexander S. Ecker, Sebastian Gerwinn, Andreas S. Tolias, and Matthias Bethge. Reassessing optimal neural population codes with neurometric functions. Proceedings of the National Academy of Sciences, 108(11):4423–4428, 2011.
TDγ: Re-evaluating Complex Backups in Temporal
Difference Learning
George Konidaris*
MIT CSAIL
Cambridge MA 02139
gdk@csail.mit.edu
Scott Niekum*
Philip S. Thomas*
University of Massachusetts Amherst
Amherst MA 01003
{sniekum,pthomas}@cs.umass.edu
Abstract
We show that the λ-return target used in the TD(λ) family of algorithms is the
maximum likelihood estimator for a specific model of how the variance of an n-step return estimate increases with n. We introduce the γ-return estimator, an
alternative target based on a more accurate model of variance, which defines the
TDγ family of complex-backup temporal difference learning algorithms. We derive TDγ, the γ-return equivalent of the original TD(λ) algorithm, which eliminates the λ parameter but can only perform updates at the end of an episode and
requires time and space proportional to the episode length. We then derive a second algorithm, TDγ(C), with a capacity parameter C. TDγ(C) requires C times
more time and memory than TD(λ) and is incremental and online. We show that
TDγ outperforms TD(λ) for any setting of λ on 4 out of 5 benchmark domains,
and that TDγ(C) performs as well as or better than TDγ for intermediate settings
of C.
1 Introduction
Most reinforcement learning [1] algorithms are value-function based: learning is performed by estimating the expected return (discounted sum of rewards) obtained by following the current policy
from each state, and then updating the policy based on the resulting so-called value function. Efficient value function learning algorithms are crucial to this process and have been the focus of a great
deal of reinforcement learning research.
The most successful and widely-used family of value function algorithms is the TD(λ) family [2].
The TD(λ) family forms an estimate of return, called the λ-return, that blends both low variance,
bootstrapped and biased temporal-difference estimates of return with high variance, unbiased Monte
Carlo estimates of return using a parameter λ. While several different algorithms exist within the
TD(λ) family (the original incremental and online algorithm [2], replacing traces [3], least-squares
algorithms [4], algorithms for learning state-action value functions [1, 5], and algorithms for adapting λ [6], among others), the λ-return formulation has remained unchanged since its introduction in
1988 [2]. Our goal is to understand the modeling assumptions implicit in the λ-return formulation
and improve them.
We show that the λ-return estimator is only a maximum-likelihood estimator of return given a specific model of how the variance of an n-step return estimate increases with n. We propose a more
accurate model of that variance increase and use it to obtain an expression for a new return estimator,
the γ-return. This results in the TDγ family of algorithms, of which we derive TDγ, the γ-return
version of the original TD(λ) algorithm. TDγ is only suitable for the batch setting where updates
occur at the end of the episode and requires time and space proportional to the length of the episode,
but it eliminates the λ parameter. We then derive a second algorithm, TDγ(C), which requires C
times more time and memory than TD(λ) and can be used in an incremental and online setting. We
show that TDγ outperforms TD(λ) for any setting of λ on 4 out of 5 benchmark domains, and that
TDγ(C) performs as well as or better than TDγ for intermediate settings of C.

* All three authors are primary authors on this occasion.
2 Complex Backups
Estimates of return lie at the heart of value-function based reinforcement learning algorithms: a
value function V^π (which we denote here as V, assuming a fixed policy) estimates return from each
state, and the learning process aims to reduce the error between estimated and observed returns.
Temporal difference (TD) algorithms use a return estimate obtained by taking a single transition in
the MDP and then estimating the remaining return using the value function itself:

\[ R^{TD}_{s_t} = r_t + \gamma V(s_{t+1}), \quad (1) \]

where R^{TD}_{s_t} is the return estimate from state s_t and r_t is the reward for going from s_t to s_{t+1} via
action a_t. Monte Carlo algorithms (for episodic tasks) do not use intermediate estimates but instead
use the full return sample directly:

\[ R^{MC}_{s_t} = \sum_{i=0}^{L-1} \gamma^i r_{t+i}, \quad (2) \]
for an episode L transitions in length after time t. These two types of return estimates can be
considered instances of the more general notion of an n-step return sample, for n ≥ 1:

\[ R^{(n)}_{s_t} = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \ldots + \gamma^{n-1} r_{t+n-1} + \gamma^{n} V(s_{t+n}). \quad (3) \]

Here, n transitions are observed from the MDP and the remaining portion of return is estimated using
the value function. The important observation here is that all n-step return estimates can be used
simultaneously for learning. The TD(λ) family of algorithms accomplishes this using a parameter
λ ∈ [0, 1] to average n-step return estimates, according to the following equation:

\[ R^{\lambda}_{s_t} = (1 - \lambda) \sum_{n=0}^{\infty} \lambda^{n} R^{(n+1)}_{s_t}. \quad (4) \]
Note that for any episodic MDP we always obtain a finite episode length. The usual formulation of
an episodic MDP uses absorbing terminal states, i.e., states where only zero-reward self-transitions are
available. In such cases the n-step returns past the end of the episode are all equal, and therefore
TD(λ) allocates the weights corresponding to all of those return estimates to the final transition.
R^λ_{s_t}, known as the λ-return, is an estimator that blends one-step temporal difference estimates (which
are biased, but low variance) at λ = 0 and Monte Carlo estimates (which are unbiased, but high
variance) at λ = 1. This forms the target for the entire family of TD(λ) algorithms, whose members
differ largely in their use of the resulting estimates to update the value function.
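For concreteness, the n-step returns (3) and the λ-return (4) for a finite episode can be computed directly. A minimal sketch (our own, not from the paper), allocating the weight of all n-step returns past the episode end to the final transition as described above:

```python
import numpy as np

def n_step_returns(rewards, values, gamma):
    """R^(n)_{s_t} for n = 1..L, per Eq. 3.

    rewards: r_t, ..., r_{t+L-1};  values: V(s_{t+1}), ..., V(s_{t+L}),
    with V(s_{t+L}) the (zero) value of the terminal state.
    """
    L = len(rewards)
    discounted = np.array(rewards) * gamma ** np.arange(L)
    partial = np.cumsum(discounted)                    # discounted reward sums
    boot = gamma ** np.arange(1, L + 1) * np.array(values)
    return partial + boot                              # R^(1), ..., R^(L)

def lambda_return(rewards, values, gamma, lam):
    """Eq. 4, with the tail weight lam^(L-1) given to the final transition."""
    R = n_step_returns(rewards, values, gamma)
    L = len(R)
    w = (1 - lam) * lam ** np.arange(L)
    w[-1] = lam ** (L - 1)                             # absorb the remaining mass
    return w @ R

rewards = [0.0, 0.0, 1.0]
values = [0.5, 0.8, 0.0]                               # V at s_{t+1..t+3}
print(lambda_return(rewards, values, gamma=0.9, lam=0.7))
```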
What makes this a good way to average the n-step returns? Why choose this method over any
other? Viewing this as a statistical estimation problem where each n-step return is a sample of the
true return, under what conditions and for what model is equation (4) a good estimator for return?
The most salient feature of the n-step returns is that their variances increase with n. Therefore, consider the following model: each n-step return estimate R_{s_t}^{(n)} is sampled from a Gaussian distribution
centered on the true return, R_{s_t}, with variance k(n) that is some increasing function of n.¹ If we
assume the n-step returns are independent,² then the likelihood function for return estimate R̂_{s_t} is:

L(R̂_{s_t} | R_{s_t}^{(1)}, ..., R_{s_t}^{(L)}; k) = Π_{n=1}^{L} N(R_{s_t}^{(n)} | R̂_{s_t}, k(n)).   (5)

¹ We should note that this assumption is not quite true: only the Monte Carlo return is unbiased.
² Again, this assumption is not true. However, it allows us to obtain a simple, closed-form estimator.
Maximizing the log of this equation obtains the maximum likelihood estimator for R̂_{s_t}:

R̂_{s_t} = ( Σ_{n=1}^{L} k(n)^{−1} R_{s_t}^{(n)} ) / ( Σ_{n=1}^{L} k(n)^{−1} ).   (6)
Thus, we obtain a weighted sum: each n-step return is weighted by the inverse of its variance and
the entire sum is normalized so that the weights sum to 1. From here we can see that if we let L go
to infinity and set k(n) = λ^{−n}, 0 ≤ λ ≤ 1, then we obtain the λ-return estimator in equation (4),
since Σ_{n=0}^{∞} λ^n = 1/(1 − λ).
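As a quick numerical sanity check (our own illustration, not from the paper), the snippet below applies the inverse-variance weighting of equation (6) with k(n) = λ^{−n} and confirms that the resulting normalized weights coincide with the (truncated) λ-return weights of equation (4):

```python
import numpy as np

lam, L = 0.9, 1000                        # large L approximates the infinite sum
inv_k = lam ** np.arange(1, L + 1)        # 1/k(n) = lam^n when k(n) = lam^(-n)
w = inv_k / inv_k.sum()                   # eq. (6): inverse-variance weights
w_lam = (1 - lam) * lam ** np.arange(L) / (1 - lam ** L)  # truncated eq. (4)
print(np.allclose(w, w_lam))              # True: the two weightings coincide
```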
Thus, λ-return (as used in the TD(λ) family of algorithms) is the maximum-likelihood estimator of
return under the following assumptions:
1. The n-step returns from a given state are independent.
2. The n-step returns from a given state are normally distributed with a mean of the true return.
3. The variances of the n-step returns from each state increase according to a geometric progression in n, with common ratio λ^{−1}.
All of these assumptions could be improved, but the third is the most interesting. In this view, the
variance of an n-step sample return increases geometrically with n and λ parametrizes the shape of
this geometric increase.
3 γ-Return and the TDγ Family of Algorithms
Consider the variance of an n-step sample return, n > 1:

Var[R_{s_t}^{(n)}] = Var[R_{s_t}^{(n−1)} − γ^{n−1}V(s_{t+n−1}) + γ^{n−1}r_{t+n−1} + γ^n V(s_{t+n})]   (7)
             = Var[R_{s_t}^{(n−1)}] + γ^{2(n−1)} Var[V(s_{t+n−1}) − (r_{t+n−1} + γV(s_{t+n}))]
               + 2Cov[R_{s_t}^{(n−1)}, −γ^{n−1}V(s_{t+n−1}) + γ^{n−1}r_{t+n−1} + γ^n V(s_{t+n})].   (8)
Examining the last term, we see that:

Cov[R_{s_t}^{(n−1)}, −γ^{n−1}V(s_{t+n−1}) + γ^{n−1}r_{t+n−1} + γ^n V(s_{t+n})]   (9)
  = Cov[R_{s_t}^{(n−1)}, R_{s_t}^{(n)} − R_{s_t}^{(n−1)}]   (10)
  = Cov[R_{s_t}^{(n−1)}, R_{s_t}^{(n)}] − Cov[R_{s_t}^{(n−1)}, R_{s_t}^{(n−1)}]   (11)
  = Cov[R_{s_t}^{(n−1)}, R_{s_t}^{(n)}] − Var[R_{s_t}^{(n−1)}].   (12)
Since R_{s_t}^{(n−1)} and R_{s_t}^{(n)} are highly correlated, being two successive return samples, we assume that
Cov[R_{s_t}^{(n−1)}, R_{s_t}^{(n)}] ≈ Var[R_{s_t}^{(n−1)}] (equality holds when R_{s_t}^{(n)} and R_{s_t}^{(n−1)} are perfectly correlated).
Thus, equation (12) is approximately zero. Hence, equation (8) becomes:

Var[R_{s_t}^{(n)}] ≈ Var[R_{s_t}^{(n−1)}] + γ^{2(n−1)} Var[V(s_{t+n−1}) − (r_{t+n−1} + γV(s_{t+n}))].   (13)
Notice that the final term on the right hand side of equation (13) is the discounted variance of the
temporal difference error n steps after the current state. We assume that this variance is roughly the
same for all states; let that value be κ. Since κ also approximates the variance of the 1-step return
(i.e., k(1) = κ), we obtain the following simple model of the variance of an n-step sample of return:

k(n) = Σ_{i=1}^{n} γ^{2(i−1)} κ.   (14)
Substituting equation (14) into the general return estimator in equation (6), we obtain:

R_{s_t}^γ = ( Σ_{n=1}^{L} (Σ_{i=1}^{n} γ^{2(i−1)})^{−1} R_{s_t}^{(n)} ) / ( Σ_{n=1}^{L} (Σ_{i=1}^{n} γ^{2(i−1)})^{−1} ) = Σ_{n=1}^{L} w(n, L) R_{s_t}^{(n)},   (15)

where

w(n, L) = (Σ_{i=1}^{n} γ^{2(i−1)})^{−1} / ( Σ_{n=1}^{L} (Σ_{i=1}^{n} γ^{2(i−1)})^{−1} )   (16)

is the weight associated with the nth-step return in a trajectory of length L after time t. This estimator
has the virtue of being parameter-free since the κ values cancel. Therefore, we need not estimate κ:
under the assumption of independent, Gaussian n-step returns with variances increasing according
to equation (13), equation (15) is the maximum likelihood estimator for any value of κ. We call this
estimator the γ-return since it weights the n-step returns according to the discount factor.
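For reference, here is a minimal Python sketch of the γ-return weights of equation (16) and the estimator of equation (15); this is our own illustration and the function names are assumptions, not code from the paper:

```python
import numpy as np

def gamma_weights(gamma, L):
    """w(n, L) from eq. (16): inverse of the modeled variance k(n) (kappa
    cancels in the normalization), normalized over n = 1 .. L."""
    k = np.cumsum(gamma ** (2 * np.arange(L)))   # k(n)/kappa = sum_i gamma^{2(i-1)}
    inv_k = 1.0 / k
    return inv_k / inv_k.sum()

def gamma_return(n_step_returns, gamma):
    """Eq. (15): weighted sum of the n-step returns R^(1) .. R^(L)."""
    R = np.asarray(n_step_returns, dtype=float)
    return gamma_weights(gamma, len(R)) @ R
```

Plotting gamma_weights for a few (γ, L) pairs reproduces the qualitative shape of the right panel of Figure 1.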
Figure 1 compares the weightings obtained using λ-return and γ-return for a few example trajectory
lengths. There are two important qualitative differences. First, the λ-return weightings spike at the
end of the trajectory, whereas the γ-return weightings do not. This occurs because even though any
sample trajectory has finite length, the λ-return as defined in equation (4) is actually an infinite sum;
the remainder of the weight mass is allocated to the Monte Carlo return. This allows the normalizing
factor in equation (4) to be a constant, rather than having it depend on the length of the trajectory, as
it does in equation (15) for the γ-return. This significantly complicates the problem of obtaining an
incremental algorithm using γ-return, as we shall see in later sections.
[Figure 1 here: weight (y-axis) against return estimate length (x-axis). Left panel: λ-return weightings for (λ=0.75, L=10), (λ=0.85, L=20), (λ=0.8, L=30). Right panel: γ-return weightings for (L=10, γ=0.95), (L=20, γ=0.95), (L=30, γ=0.95), (L=30, γ=0.8).]

Figure 1: Example weights for trajectories of various lengths for λ-return (left) and γ-return (right).
Second, while the λ-return weightings tend to zero over time, the γ-return weightings tend toward
a constant. This means that very long trajectories will be effectively "cut off" after some point and
have effectively no contribution to the λ-return, whereas after a certain length in the γ-return all
n-step returns have roughly equal weighting. This also complicates the problem of obtaining an
incremental algorithm using γ-return: even if we were to assume infinitely many n-step returns past
the end of the episode, the normalizing constant would not become finite.
Nevertheless, we can use the γ-return estimator to obtain an entire family of TDγ learning algorithms; for any TD(λ) algorithm we can derive an equivalent TDγ algorithm. In the following
section, we derive TDγ, the γ-return equivalent of the original TD(λ) algorithm.
4 TDγ
Given a set of m trajectories T = {τ_1, τ_2, ..., τ_m}, where l_τ = |τ| denotes the number of
(s_t^τ, r_t^τ, s_{t+1}^τ) tuples in the trajectory τ, and using approximator V̂_θ with parameters θ to approximate
V, the objective function for regression is:

E(θ) = (1/2) Σ_{τ∈T} Σ_{t=0}^{l_τ−1} ( R_{s_t^τ}^γ − V̂_θ(s_t^τ) )²   (17)
     = (1/2) Σ_{τ∈T} Σ_{t=0}^{l_τ−1} ( Σ_{n=1}^{l_τ−t} w(n, l_τ−t) R_{s_t^τ}^{(n)} − V̂_θ(s_t^τ) )².   (18)

Because Σ_{n=1}^{l_τ−t} w(n, l_τ−t) = 1, we can write

E(θ) = (1/2) Σ_{τ∈T} Σ_{t=0}^{l_τ−1} ( Σ_{n=1}^{l_τ−t} w(n, l_τ−t) R_{s_t^τ}^{(n)} − Σ_{n=1}^{l_τ−t} w(n, l_τ−t) V̂_θ(s_t^τ) )²   (19)
     = (1/2) Σ_{τ∈T} Σ_{t=0}^{l_τ−1} ( Σ_{n=1}^{l_τ−t} w(n, l_τ−t) [ R_{s_t^τ}^{(n)} − V̂_θ(s_t^τ) ] )².   (20)
Our goal is to minimize E(θ). One approach is to descend the gradient ∇_θ E(θ), assuming that the
R_{s_t^τ}^{(n)} are noisy samples of V(s_t^τ) and not a function of θ, as in the derivation of TD(λ) [7]:

Δθ = −α∇_θ E(θ) = α Σ_{τ∈T} Σ_{t=0}^{l_τ−1} Σ_{n=1}^{l_τ−t} w(n, l_τ−t) [ R_{s_t^τ}^{(n)} − V̂_θ(s_t^τ) ] ∇_θ V̂_θ(s_t^τ),   (21)

where α is a learning rate. We can substitute in n = u − t (where u is the current time step, s_t is the
state we want to update the value estimate of, and n is the length of the n-step return that ends at the
current time step) to get:

Δθ = α Σ_{τ∈T} Σ_{t=0}^{l_τ−1} Σ_{u=t+1}^{l_τ} w(u−t, l_τ−t) [ R_{s_t^τ}^{(u−t)} − V̂_θ(s_t^τ) ] ∇_θ V̂_θ(s_t^τ).   (22)
Swapping the sums allows us to derive the backward version of TDγ:

Δθ = α Σ_{τ∈T} Σ_{u=1}^{l_τ} Σ_{t=0}^{u−1} w(u−t, l_τ−t) [ R_{s_t^τ}^{(u−t)} − V̂_θ(s_t^τ) ] ∇_θ V̂_θ(s_t^τ).   (23)
Expanding and rearranging the terms gives us an algorithm for TDγ when using linear function
approximation with weights θ:

Δθ = −α Σ_{τ∈T} Σ_{u=1}^{l_τ} Σ_{t=0}^{u−1} w(u−t, l_τ−t) [ −( Σ_{i=t}^{u−1} γ^{i−t} r_{s_i^τ} ) − γ^{u−t}(θ · φ_{s_u^τ}) + (θ · φ_{s_t^τ}) ] φ_{s_t^τ}   (24)
   = −α Σ_{τ∈T} Σ_{u=1}^{l_τ} Σ_{t=0}^{u−1} w(u−t, l_τ−t) [ θ · (φ_{s_t^τ} − γ^{u−t} φ_{s_u^τ}) − Σ_{i=t}^{u−1} γ^{i−t} r_{s_i^τ} ] φ_{s_t^τ}   (25)
   = −α Σ_{τ∈T} Σ_{u=1}^{l_τ} Σ_{t=0}^{u−1} w(u−t, l_τ−t) [ θ · a − b ] φ_{s_t^τ},   (26)

where φ_{s_t^τ} is the feature vector at state s_t^τ, a = φ_{s_t^τ} − γ^{u−t} φ_{s_u^τ}, and b = Σ_{i=t}^{u−1} γ^{i−t} r_{s_i^τ}. This leads to
TDγ for episodic tasks (given in Algorithm 1), which eliminates the eligibility trace parameter λ. For
episode length l_τ and feature vector size F, the algorithm can be implemented with time complexity
of O(l_τ F) per step and space complexity O(l_τ F).
Algorithm 1 TDγ
Given: A discount factor, γ
1: θ ← 0
2: for each trajectory τ ∈ T do
3:   Store φ_0 in memory
4:   for u = 1 to l_τ do
5:     Store φ_u and r_{u−1} in memory
6:     Δ ← 0
7:     for t = 0 to u − 1 do
8:       a ← φ_t − γ^{u−t} φ_u
9:       b ← Σ_{i=t}^{u−1} γ^{i−t} r_i
10:      Δ ← Δ + w(u − t, l_τ − t)[θ · a − b] φ_t
11:    end for
12:    θ ← θ − αΔ
13:  end for
14:  Discard all φ and r from memory
15: end for
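For concreteness, here is a compact NumPy transcription of Algorithm 1's per-trajectory update with linear function approximation; it reflects our reading of the pseudocode (variable names are ours), not the authors' implementation:

```python
import numpy as np

def w(n, L, gamma):
    """Weight w(n, L) from eq. (16)."""
    k = np.cumsum(gamma ** (2 * np.arange(L)))   # modeled variance, up to kappa
    inv_k = 1.0 / k
    return inv_k[n - 1] / inv_k.sum()

def td_gamma_episode(theta, phis, rewards, gamma, alpha):
    """One trajectory of batch TD-gamma (Algorithm 1).
    phis: feature vectors phi_0 .. phi_L; rewards: r_0 .. r_{L-1}."""
    L = len(rewards)
    for u in range(1, L + 1):
        delta = np.zeros_like(theta)
        for t in range(u):
            a = phis[t] - gamma ** (u - t) * phis[u]
            b = sum(gamma ** (i - t) * rewards[i] for i in range(t, u))
            delta += w(u - t, L - t, gamma) * (theta @ a - b) * phis[t]
        theta = theta - alpha * delta
    return theta
```

Note that the weights w(u − t, l_τ − t) require the episode length, which is why this version is batch-only, exactly as discussed below.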
Unfortunately, implementing this backward TDγ incrementally is problematic: l_τ is not known until
the end of the trajectory is reached, and without it, the normalizing constant of the weights used in
the updates cannot be computed. Thus, Algorithm 1 can only be used for batch updates where each
episode's trajectory data is stored until an update is performed at the end of an episode; this is often
undesirable, and in continuing tasks, impossible.
TD(λ) is able to achieve O(F) time and space for two reasons. First, the weight normalization is
a constant and does not depend on the length of the episode. Second, the operation that must be
performed on each trace is the same: a multiplication by λ. Thus, TD(λ) need only store the sum
of the return estimates from each state, rather than having to store each individually.
One approach to deriving an incremental algorithm is to use only the first C n-step returns from
each state, similar to truncated temporal differences [8]. This eliminates the first barrier: weight
normalization no longer relies on the episode length, except for the final C − 1 states, which can
be corrected for after the episode ends. This approach has time complexity O(CF) and space
complexity O(CF) (and is therefore C times more expensive than TD(λ)), and replaces λ with
the more intuitive parameter C rather than eliminating it, but it affords the incremental TDγ(C)
algorithm given in Algorithm 2. Note that setting C = 1 obtains TD(0), and C ≥ l_τ obtains TDγ.
5 Empirical Comparisons
Figure 2 shows empirical comparisons of TD(λ) (for various values of λ), TDγ and TDγ(C) for 5
common benchmarks. The first is a 10 × 10 discrete gridworld with the goal in the bottom right,
deterministic movement in the four cardinal directions, a terminal reward of +10 and −0.5 for all
other transitions, and γ = 1.0. For the gridworld, the agent selected one of the optimal actions with
probability 0.4, and each of the other actions with probability 0.2. The second domain is the 5-state
discrete chain domain [1] with random transitions to the left and right, and γ = 0.95. The third
domain is the pendulum swing-up task [7] with a reward of 1.0 for entering a terminal state (the
pendulum is nearly vertical) and zero elsewhere, and γ = 0.95. The optimal action was selected
with probability 0.5, with a random action selected otherwise. The fourth domain is mountain car
[1] with γ = 0.99, and using actions from a hand-coded policy with probability 0.75, and random
actions otherwise. The fifth and final domain is the acrobot [1] with a terminal reward of 10 and
−0.1 elsewhere. A random policy was used with γ = 0.95. In all cases the start state was selected
uniformly from the set of nonterminal states. A 5th order Fourier basis [9] was used as the function
approximator for the 3 continuous domains. We used 10, 5, 10, 3, and 10 trajectories, respectively.
TDγ outperforms TD(λ) for all settings of λ in 4 out of the 5 domains. In the chain domain TDγ
performs better than most settings of λ but slightly worse than the optimal setting. An interesting and
somewhat unexpected result is that TDγ(C) with a relatively low setting of C does at least as well
as, or in some cases better than, TDγ.
Algorithm 2 TDγ(C)
Given: A discount factor, γ
A cut-off length, C
1: θ ← 0
2: for each trajectory τ ∈ T do
3:   Store φ_0 in memory
4:   for u = 1 to l_τ do
5:     If u > C, discard φ_{u−C−1}, θ^{u−C−1}, and r_{u−C−1} from memory
6:     θ^{u−1} ← θ
7:     Store φ_u, θ^{u−1}, and r_{u−1} in memory
8:     Δ ← 0
9:     m = max(0, u − C)
10:    for t = m to u − 1 do
11:      a ← φ_t − γ^{u−t} φ_u
12:      b ← Σ_{i=t}^{u−1} γ^{i−t} r_i
13:      Δ ← Δ + w(u − t, C)[θ · a − b] φ_t
14:    end for
15:    θ ← θ − αΔ
16:  end for
17:  m = min(l_τ, C)
18:  θ ← θ^{l_τ−m}
19:  for ū = l_τ − m to l_τ do
20:    Δ ← 0
21:    for t = m to ū − 1 do
22:      a ← φ_t − γ^{ū−t} φ_ū
23:      b ← Σ_{i=t}^{ū−1} γ^{i−t} r_i
24:      Δ ← Δ + w(ū − t, m − t)[θ · a − b] φ_t
25:    end for
26:    θ ← θ − αΔ
27:    Discard φ_ū, θ^{ū−1}, and r_{ū−1} from memory
28:  end for
29: end for
This could occur because the n-step returns become very similar for large n, due either to γ discounting diminishing the difference, or to the additional one-step rewards accounting for a very small fraction of the total return. These near-identical estimates
will accumulate a large fraction of the weighting (see Figure 1) and come to dominate the γ-return
estimate. This suggests that once the returns start to become almost identical they should not be
considered independent samples and should instead be discarded.
6 Discussion and Future Work
An immediate goal of future work is finding an automatic way to set C. We may be able to calculate
bounds on the diminishing differences between n-step returns due to γ, or empirically track the
point at which those differences begin to diminish. Another avenue for future research is deriving a
version of TDγ or TDγ(C) that provably converges for off-policy data with function approximation,
most likely using recent insights on gradient-descent based TD algorithms [10]. Thereafter, we aim
to develop an algorithm based on γ-return for control rather than just prediction, for example Sarsaγ.
We have shown that the widely used λ-return formulation is the maximum-likelihood estimator of
return given three assumptions (see section 2). The results presented here have shown that re-evaluating just one of these assumptions results in more accurate value function approximation algorithms.
We expect that re-evaluating all three will prove a fruitful avenue for future research.
[Figure 2 here: five panels (Gridworld, Chain, Pendulum, Mountain Car, Acrobot), each plotting MSE against the algorithm used: TD(λ) for λ ∈ {0, 0.05, 0.1, 0.15, ..., 0.95, 0.99, 1}, TDγ, and TDγ(C) for C ∈ {2, 5, 10, 20, 50, 100, 200, 500, 1000}.]
Figure 2: Mean squared error (MSE) over sample trajectories from five benchmark domains for
TD(λ) with various settings of λ, TDγ, and TDγ(C), for various settings of C. Error bars are
standard error over 100 samples. Each result is the minimum MSE (weighted by state visitation
frequency) between each algorithm's approximation and the correct value function (obtained using
a very large number of Monte Carlo samples), found by searching over α at increments of 0.0001.
Acknowledgments
We would like to thank David Silver, Hamid Maei, Gustavo Goretkin, Sridhar Mahadevan and Andy
Barto for useful discussions. George Konidaris was supported in part by the AFOSR under grant
AOARD-104135 and the Singapore Ministry of Education under a grant to the Singapore-MIT International Design Center. Scott Niekum was supported by the AFOSR under grant FA9550-08-10418.
References
[1] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[2] R.S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988.
[3] S. Singh and R.S. Sutton. Reinforcement learning with replacing eligibility traces. Machine Learning, 22:123–158, 1996.
[4] J.A. Boyan. Least squares temporal difference learning. In Proceedings of the 16th International Conference on Machine Learning, pages 49–56, 1999.
[5] H.R. Maei and R.S. Sutton. GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces. In Proceedings of the Third Conference on Artificial General Intelligence, 2010.
[6] C. Downey and S. Sanner. Temporal difference Bayesian model averaging: A Bayesian perspective on adapting lambda. In Proceedings of the 27th International Conference on Machine Learning, pages 311–318, 2010.
[7] K. Doya. Reinforcement learning in continuous time and space. Neural Computation, 12(1):219–245, 2000.
[8] P. Cichosz. Truncating temporal differences: On the efficient implementation of TD(λ) for reinforcement learning. Journal of Artificial Intelligence Research, 2:287–318, 1995.
[9] G.D. Konidaris, S. Osentoski, and P.S. Thomas. Value function approximation in reinforcement learning using the Fourier basis. In Proceedings of the Twenty-Fifth Conference on Artificial Intelligence, pages 380–385, 2011.
[10] R.S. Sutton, H.R. Maei, D. Precup, S. Bhatnagar, D. Silver, Cs. Szepesvari, and E. Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th International Conference on Machine Learning, 2009.
Hierarchical Matching Pursuit for Image
Classification: Architecture and Fast Algorithms
Liefeng Bo
University of Washington
Seattle WA 98195, USA
Xiaofeng Ren
ISTC-Pervasive Computing Intel Labs
Seattle WA 98195, USA
Dieter Fox
University of Washington
Seattle WA 98195, USA
Abstract
Extracting good representations from images is essential for many computer vision tasks. In this paper, we propose hierarchical matching pursuit (HMP), which
builds a feature hierarchy layer-by-layer using an efficient matching pursuit encoder. It includes three modules: batch (tree) orthogonal matching pursuit, spatial
pyramid max pooling, and contrast normalization. We investigate the architecture
of HMP, and show that all three components are critical for good performance.
To speed up the orthogonal matching pursuit, we propose a batch tree orthogonal
matching pursuit that is particularly suitable to encode a large number of observations that share the same large dictionary. HMP is scalable and can efficiently
handle full-size images. In addition, HMP enables linear support vector machines
(SVM) to match the performance of nonlinear SVM while being scalable to large
datasets. We compare HMP with many state-of-the-art algorithms including convolutional deep belief networks, SIFT based single layer sparse coding, and kernel
based feature learning. HMP consistently yields superior accuracy on three types
of image classification problems: object recognition (Caltech-101), scene recognition (MIT-Scene), and static event recognition (UIUC-Sports).
1 Introduction
Visual recognition is a major focus of research in computer vision, machine learning, and robotics.
Many real world vision systems fundamentally rely on the ability to recognize object instances,
categories, scenes, and activities. In the past few years, more and more people have realized that
the core of building recognition systems is to learn meaningful representations (features) from highdimensional observations such as images and videos. A growing amount of research on visual
recognition has focused on learning rich features using modern machine learning methods.
Deep belief nets [9] built a hierarchy of features by greedily training each layer separately using
the restricted Boltzmann machine. The learned weights are then used to initialize multi-layer feedforward networks that further adjust the weights to the task at hand using supervision. To handle
full-size images, Lee et al. [16] proposed convolutional deep belief networks (CDBN) that use a
small receptive field and share the weights between the hidden and visible layers among all locations in an image. Invariant predictive sparse decomposition [11, 13] used feed-forward neural
networks to approximate sparse codes generated by sparse coding and avoided solving computationally expensive optimizations at runtime. Deconvolutional networks [26] reconstructed images
using a group of latent feature maps in a convolutional way under a sparsity constraint. A fast optimization algorithm was introduced to solve the resulting sparse coding problem. These approaches
1
have been shown to yield competitive performance with the SIFT based bag-of-visual-words model
on object recognition benchmarks such as Caltech-101.
Recent research has shown that single layer sparse coding on top of SIFT features works surprisingly
well [15, 24, 23, 5, 6]. Yang et al. [24] proposed a single layer feature learning model ScSPM that
uses SIFT features as the input to sparse coding instead of raw image patches. Their experiments
have shown that this approach outperforms the classical bag-of-visual-words model and convolutional deep belief networks, and achieves the state-of-the-art performance on many image classification benchmarks. Wang et al. presented a fast implementation of local coordinate coding [23]
that obtains sparse representations of SIFT features by performing local linear embedding on several
nearest visual words in the codebook. Boureau et al. [5] compared many feature learning algorithms,
and found that the SIFT based sparse coding in conjunction with max pooling performs remarkably
well, and the macrofeatures can boost recognition performance further. Coates and Ng [6] evaluated
many single layer feature learning systems by decomposing feature learning algorithms into training
and encoding phases, and suggested that the choice of architecture and encoder is the key to a successful feature learning system. Very recently, Yu et al. [25] showed that hierarchical sparse coding
(HSC) at pixel level achieves similar performance with SIFT based sparse coding.
However, single layer sparse coding heavily depends on hand-crafted SIFT features. It is desirable to develop efficient and effective algorithms to learn features from scratch. Motivated by the
recent work on deep networks, in this work we propose hierarchical matching pursuit (HMP) that
uses the matching pursuit encoder to build a feature hierarchy layer by layer. The matching pursuit
encoder consists of three modules: batch tree orthogonal matching pursuit coding, spatial pyramid
max pooling, and contrast normalization. We discuss the architecture of HMP, and show that spatial
pyramid max pooling, contrast normalization, and hierarchical structure are key components to learn
good representations for recognition. We further present batch tree orthogonal matching pursuit that
is able to speed up the search of sparse codes significantly when a large number of observations
share the same dictionary. Our CPU implementation of HMP can extract the features from a typical
300 ? 300 image in less than one second. Our experiments on object recognition, scene recognition, and static event recognition confirm that HMP yields better accuracy than hierarchical feature
learning, SIFT based single layer sparse coding, and many other state-of-the-art image classification algorithms on standard datasets. To the best of our knowledge, this is the first work to show
that learning features from the pixel level significantly outperforms those approaches built on top of
hand-crafted SIFT.
2 Hierarchical Matching Pursuit
In this section, we introduce hierarchical matching pursuit. We first show how K-SVD is used to
learn the dictionary. We then propose the matching pursuit encoder, and investigate its architecture
and fast algorithms to compute sparse codes. Finally, we discuss how to build hierarchical matching
pursuit based on the matching pursuit encoder.
2.1 Dictionary Learning with K-SVD
K-SVD is a simple and efficient dictionary learning algorithm developed by Aharon et al. [1, 21].
K-SVD generalizes the idea of K-Means and updates the dictionary sequentially. Given a set of
h-dimensional observations Y = [y_1, ..., y_n] ∈ R^{h×n} (image patches in our case), K-SVD learns
a dictionary D = [d_1, ..., d_m] ∈ R^{h×m}, where d_i is called a filter (or atom), and an associated
sparse code matrix X = [x_1, ..., x_n] ∈ R^{m×n} by minimizing the following reconstruction error

min_{D,X} ‖Y − DX‖²_F   s.t. ∀i, ‖x_i‖₀ ≤ K   (1)

where the notation ‖A‖_F denotes the Frobenius norm, the x_i are the columns of X, the zero-norm ‖·‖₀
counts the non-zero entries in the sparse code x_i, and K is the sparsity level, which bounds the
number of non-zero entries.

This optimization problem can be solved in an alternating manner. In the first stage, D is fixed, and
only the sparse code matrix is optimized. This problem can be decoupled into n simpler sub-problems

min_{x_i} ‖y_i − Dx_i‖²   s.t. ‖x_i‖₀ ≤ K   (2)
Algorithm 1: Batch Orthogonal Matching Pursuit (BOMP)
1. Input: Dictionary D, observation y, and the desired sparsity level K
2. Output: Sparse code x such that y ≈ Dx
3. Initialization: I = ∅, α⁰ = Dᵀy, G = DᵀD, and x = 0
4. For k = 1 : K
5.   Selecting the new filter: k = argmax_k |α_k|
6.   I = I ∪ k
7.   Updating the sparse code: x_I = G_II⁻¹ α⁰_I
8.   Updating α: α = α⁰ − G_I x_I
9. End
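For illustration, here is a small NumPy sketch of BOMP as we read Algorithm 1; the function name and the use of numpy.linalg.solve are our choices, not the authors' implementation:

```python
import numpy as np

def bomp(D, y, K):
    """Batch orthogonal matching pursuit: find x with ||x||_0 <= K and y ~ D x.
    D is h x m with (approximately) unit-norm columns."""
    alpha0 = D.T @ y                 # precomputed correlations D^T y
    G = D.T @ D                      # precomputed Gram matrix D^T D
    alpha = alpha0.copy()
    I, x = [], np.zeros(D.shape[1])
    for _ in range(K):
        k = int(np.argmax(np.abs(alpha)))                  # most correlated filter
        I.append(k)
        x_I = np.linalg.solve(G[np.ix_(I, I)], alpha0[I])  # x_I = G_II^{-1} alpha0_I
        alpha = alpha0 - G[:, I] @ x_I                     # update alpha (eq. 4)
    x[I] = x_I
    return x
```

A production encoder would update a Cholesky factor of G_II incrementally rather than re-solve the system from scratch at every step.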
This optimization problem is combinatorial and highly non-convex, but an approximate solution
can be found by the orthogonal matching pursuit discussed in the next section. In the second stage,
the dictionary D and its associated sparse coefficients are updated simultaneously by the Singular
Value Decomposition (SVD). For a given filter k, the quadratic term in (1) can be rewritten as

‖Y − DX‖²_F = ‖Y − Σ_{j≠k} d_j x_jᵀ − d_k x_kᵀ‖²_F = ‖E_k − d_k x_kᵀ‖²_F   (3)

where the x_jᵀ are the rows of X, and E_k is the residual matrix for the k-th filter. The optimal d_k and x_k
can be obtained by performing SVD of the matrix E_k. To avoid the introduction of new non-zero
entries in the sparse code matrix X, the update process only uses the observations whose sparse
codes have used the k-th filter (i.e., whose k-th sparse code entry is non-zero). When the sparsity
level K is set to 1 and the sparse code matrix is forced to be a binary (0/1) matrix, K-SVD
exactly reproduces the K-Means algorithm.
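The per-filter update can be sketched in a few lines of NumPy (a minimal illustration under our reading of the description above, not the reference K-SVD code):

```python
import numpy as np

def ksvd_update_filter(Y, D, X, k):
    """Update filter d_k and its coefficients via a rank-1 SVD of the residual
    E_k from eq. (3), restricted to observations whose codes use filter k."""
    users = np.nonzero(X[k, :])[0]            # observations with x_k != 0
    if users.size == 0:
        return
    E_k = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
    D[:, k] = U[:, 0]                          # leading left singular vector
    X[k, users] = s[0] * Vt[0, :]              # matching sparse coefficients
```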
2.2 Matching Pursuit Encoder
Our matching pursuit encoder consists of three modules: batch tree orthogonal matching pursuit,
spatial pyramid max pooling, and contrast normalization.
Batch Tree Orthogonal Matching Pursuit. The orthogonal matching pursuit (OMP) [19] computes an approximate solution for the optimization problem in Eq. (2) in a greedy style. At each step,
it selects the filter with the highest correlation to the current residual. At the first step, the residual
is exactly the observation. Once the new filter is selected, the observation is orthogonally projected
onto the span of all the previously selected filters and the residual is recomputed. This procedure is
repeated until the desired K filters are selected. The quantities in the sparse code update need not
be computed from scratch. The vector D_Iᵀy can be incrementally updated by simply appending a
new entry d_kᵀy, where D_I denotes the sub-matrix of D containing the columns indexed by I. The
inversion of the matrix (D_IᵀD_I)⁻¹ can be obtained using a progressive Cholesky factorization that
updates the matrix inversion incrementally.

In our application, sparse codes for a large number of image patches are computed with the same dictionary. The total cost of orthogonal matching pursuit can be reduced by batch orthogonal matching
pursuit (BOMP) (Algorithm 1), which pre-computes some quantities [7, 22]. The key finding is that
filter selection, the most expensive step, doesn't require x and r explicitly. Let α = Dᵀr; we have

α = Dᵀr = Dᵀ(y − D_I(D_IᵀD_I)⁻¹D_Iᵀy) = α⁰ − G_I G_II⁻¹ α⁰_I   (4)

where we have set α⁰ = Dᵀy and G = DᵀD, and G_II is the sub-matrix of G containing the
rows indexed by I and the columns indexed by I. Equation (4) indicates that if α⁰ and G are precomputed, the cost of updating α is O(mK), instead of O(mh). In orthogonal matching pursuit, we
have K ≤ h since the h filters allow us to exactly reconstruct the observations. Note that the cost
of searching sparse codes quickly dominates that of the pre-computation as observations increase.
When using an overcomplete dictionary, K is usually much less than h. In our experiments, K is
10 and h is several hundred in the second layer of HMP, and we have observed significant speedup
(Section 3) over orthogonal matching pursuit.
Algorithm 2: Batch Tree Orthogonal Matching Pursuit (BTOMP)
1. Input: Dictionary D, centers C, observation y, and the desired sparsity level K
2. Output: Sparse code x such that y ≈ Dx
3. Initialization: I = ∅, r = y, α = α⁰ = Cᵀy, B = CᵀD, and x = 0
4. For k = 1 : K
5.   Choosing the sub-dictionary g_j: j = argmax_k |α_k|
6.   Selecting the new filter: k = argmax_{k∈g_j} |d_kᵀr|
7.   I = I ∪ k
8.   Updating the sparse code: x_I = (D_IᵀD_I)⁻¹D_Iᵀy
9.   Updating α: α = α⁰ − B_I x_I
10.  Computing the residual: r = y − D_I x_I
11. End
Pre-computing G takes O(m²h) time and O(m²) memory, which becomes infeasible for a very large
dictionary. To overcome this problem, we propose batch tree orthogonal matching pursuit (BTOMP)
(Algorithm 2) that organizes the dictionary using a tree structure. BTOMP uses K-Means to group
the dictionary into the o sub-dictionaries {D_{g_1}, ..., D_{g_o}}, and associates the sub-dictionaries with
the learned centers C = [c_1, ..., c_o]. The filter is selected in two steps: (1) select the center that best
matches the current residual, and (2) choose the filter within the sub-dictionary associated with this
center. BTOMP reduces the cost of the filter selection to O(oK + mh/o) and the memory to O(om).
BTOMP uses a tree based approximate algorithm to select the filter, and we have found that it works
well in practice. If o = m, BTOMP exactly recovers the batch orthogonal matching pursuit.
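A sketch of BTOMP's two-step filter selection (our own illustration; for simplicity it recomputes the center correlations from the residual instead of maintaining α incrementally via B = CᵀD as in Algorithm 2):

```python
import numpy as np

def btomp_select(D, C, groups, r):
    """Two-step selection: (1) pick the center most correlated with the
    residual r; (2) pick the best filter within that center's sub-dictionary.
    groups[j] lists the column indices of sub-dictionary D_{g_j}."""
    j = int(np.argmax(np.abs(C.T @ r)))               # step 1: best center
    idx = np.asarray(groups[j])
    k = idx[int(np.argmax(np.abs(D[:, idx].T @ r)))]  # step 2: best filter in g_j
    return k
```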
Spatial Pyramid Max Pooling. Spatial pyramid max pooling is a highly nonlinear operator that
generates higher level representations from sparse codes of local patches which are spatially close.
It aggregates these sparse codes using max pooling in a multi-level patch decomposition. At level
0, the decomposition consists of just a single spatial cell (whole patch). At level 1, the patch is
subdivided into four quadrants, yielding four feature vectors, and so on. Let U be the number of
pyramid levels, V_u the number of spatial cells in the u-th pyramid level, and P an image cell;
then max pooling at the spatial cell P can be represented as

F(P) = [ max_{j∈P} |x_{j1}|, ..., max_{j∈P} |x_{jh}| ]   (5)

Concatenating max pooling features from different spatial cells, we have the patch-level feature:
F(P) = [F(P^1_1), ..., F(P^1_{V_1}), ..., F(P^U_{V_U})].
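A NumPy sketch of this pooling operator over a 3-level pyramid (1 × 1, 2 × 2, 4 × 4), matching the setup used in the experiments below; grid handling is simplified and the function name is ours:

```python
import numpy as np

def spatial_pyramid_max_pool(codes, levels=(1, 2, 4)):
    """codes: (H, W, m) array of sparse codes on a spatial grid. Returns the
    concatenation of per-cell max(|code|) (eq. 5) over all pyramid cells."""
    H, W, _ = codes.shape
    feats = []
    for v in levels:                       # v x v spatial cells at this level
        rows = np.array_split(np.arange(H), v)
        cols = np.array_split(np.arange(W), v)
        for hi in rows:
            for wi in cols:
                cell = codes[np.ix_(hi, wi)]
                feats.append(np.abs(cell).max(axis=(0, 1)))
    return np.concatenate(feats)
```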
Contrast Normalization. The magnitude of sparse codes varies over a wide range due to local variations in illumination and foreground-background contrast, so effective local contrast normalization
turns out to be essential for good recognition performance. We have compared two normalization
schemes, L1 normalization and L2 normalization, and found that the latter is consistently better than
the former. For an image patch P, the L2 normalization has the form

F(P) = F(P) / sqrt(‖F(P)‖² + ε)   (6)

where ε is a small positive number. We have experimented with different ε values. We found that
the best ε value in the first layer is around 0.1. Image intensity is always normalized to [0, 1] in our
experiments. This is intuitive because a small ε threshold is able to make low contrast patches more
separate from high contrast image patches, increasing the discrimination of features. In the deeper
layers, recognition performance is robust to the ε value as long as it is small enough (for example
ε < 10⁻⁶).
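The corresponding L2 contrast normalization of equation (6) is a one-liner (ε = 0.1 matches the first-layer setting mentioned above):

```python
import numpy as np

def contrast_normalize(F, eps=0.1):
    """L2 normalization with a small threshold eps, per eq. (6)."""
    return F / np.sqrt(np.sum(F ** 2) + eps)
```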
2.3 Hierarchical Matching Pursuit
The matching pursuit encoder in the second layer is built on top of the outputs of the matching
pursuit encoder in the first layer. Training is accomplished in a greedy, layer-wise way: once a lower
layer is trained, its dictionary is fixed, and its outputs are used as inputs to the next layer. We enforce
that the patch size in the second layer is larger than that in the first layer, which makes sure that a
higher level representation is extracted in the higher layer. More layers can be appended in a similar
way to produce deep representations.

Figure 1: Hierarchical Matching Pursuit. In the first layer, sparse codes from small image patches are
aggregated into patch-level features. In the second layer, sparse codes from patch-level features are
aggregated across the whole image to produce image-level features. Batch tree orthogonal matching
pursuit is used to compute sparse codes in each layer.
3 Experiments
We compare hierarchical matching pursuit with many state-of-the-art image classification algorithms
on three publicly available datasets: Caltech-101, MIT-Scene, and UIUC-Sports. Image intensity is
normalized to [0, 1]. All images are transformed into grayscale and resized to be no larger than
300 × 300 pixels with preserved aspect ratio.
We use two-layer hierarchical matching pursuit in all experiments. We have experimented with one-layer and three-layer HMP, but found that one-layer HMP is much worse than two-layer HMP while
the two layers by performing K-SVD on 1,000,000 sampled patches. In the first layer, we remove
the zero frequency component from image patches by subtracting their means, and initialize K-SVD
with the overcomplete discrete cosine transform (DCT) dictionary. Our pre-processing is simpler
than other feature learning approaches that normalize image patches by dividing the standard deviation and then whitening the normalized image patches [6]. In the second layer, we initialize K-SVD
with randomly sampled patch features. We set the number of the filters to be 3 times the filter size
in the first layer and to be 1000 in the second layer. We use batch orthogonal matching pursuit to
compute sparse codes. We set the sparsity level K in the two layers to be 5 and 10, respectively.
We perform max pooling in a 3-level spatial pyramid, partitioned into 1 × 1, 2 × 2, and 4 × 4
sub-regions. In the first layer, we run the matching pursuit encoder on 16 × 16 image patches over
dense grids with a step size of 4 pixels. In the second layer, we run the matching pursuit encoder on
the whole image to produce the image-level features. For computational efficiency, we perform our
spatial pyramid max pooling across the image with a step size of 4 pixels, rather than at each pixel.
Given the high dimensionality of the learned features, we train linear SVM classifiers for image
classification. Our experiments show that the linear SVM matches the performance of a nonlinear
SVM with a histogram intersection kernel, which is consistent with the observations in [24, 5]. This
allows our system to scale to large datasets. The regularization parameter in linear SVM is fixed to
10 in all the experiments. The filter size in the first layer is optimized by 5-fold cross validation on
the training set.
We compare HMP to SIFT based single layer sparse coding because of its success in both computer
vision and machine learning communities [24, 23, 5, 6]. We extract SIFT with 16 × 16 image patches
over dense regular grids with spacing of 8 pixels. We use the publicly available dense SIFT code
at http://www.cs.unc.edu/~lazebnik [14]. We perform sparse coding feature extraction using 1,000
visual words learned from 1,000,000 SIFT features, and compute image-level features by running
spatial pyramid max pooling on 1 × 1, 2 × 2 and 4 × 4 sub-regions [24].
Figure 2: Left: The overcomplete DCT dictionary with 144 filters of size 6 × 6. Right: The
dictionary with 144 filters of size 6 × 6 learned by K-SVD. It can be seen that the filters learned by
K-SVD are much more diverse than those generated by the overcomplete DCT.
Methods            | 3×3      | 4×4      | 5×5      | 6×6      | 7×7      | 8×8
DCT (orthogonal)   | 69.9±0.6 | 70.8±0.3 | 71.5±1.0 | 72.1±0.7 | 73.2±0.4 | 73.1±0.7
DCT (overcomplete) | 69.6±0.6 | 71.8±0.6 | 73.0±0.7 | 74.1±0.4 | 73.7±0.6 | 73.4±0.8
K-SVD              | 71.8±0.5 | 74.4±0.6 | 75.9±0.7 | 76.8±0.4 | 76.3±0.4 | 76.1±0.5

Table 1: Classification accuracy with different filter sizes.
3.1 Object Recognition
Caltech-101 contains 9,144 images from 101 object categories and one background category. Following the standard experimental setting, we train models on 30 images and test on no more than 50
images per category.
Filter Size in the First Layer. We show recognition accuracy as a function of the filter size in
Table 1. The other parameters are fixed to the default values. We consider the orthogonal and
overcomplete DCT dictionaries, and the overcomplete K-SVD dictionary. We have found that the
orthogonal DCT achieves the highest accuracy when all the filters are chosen (without sparsity),
and the overcomplete DCT and K-SVD have good accuracy at the sparsity level T = 5. We keep
the overcomplete DCT dictionary and the K-SVD dictionary to have roughly similar sizes. From
Table 1, we see that the orthogonal DCT dictionary works surprisingly well, and is very competitive
with current state-of-the-art feature learning algorithms (see Table 3). The overcomplete K-SVD
dictionary performs consistently better than the DCT dictionary. The best filter size of K-SVD is
6 × 6, which gives 76.8% accuracy on this dataset, about 3% higher than the overcomplete DCT.
We show the overcomplete DCT dictionary and the K-SVD dictionary in Fig. 2. As we see, the K-SVD dictionary not only includes edge and dot filters, but also texture, multi-peaked, and high
frequency filters, and is much more diverse than the overcomplete DCT dictionary.
Spatial Pyramid Pooling. Spatial pyramid max pooling introduces the different levels of spatial
information, and always outperforms flat spatial max pooling (4 × 4) by about 2% in our experiments.
Contrast Normalization. We evaluated HMP with and without contrast normalization. Our experiments show that contrast normalization improves recognition accuracy by about 3%, which suggests
this is a very useful module for feature learning.
Sparsity. We show recognition accuracy as a function of the sparsity level K in Fig. 3. The filter
size is 6 × 6. When the sparsity level in the first or second layer varies, the other parameters are fixed to
the default setting. We see that the accuracy is more robust to the zero-norm in the first layer while
more sensitive in the second layer. The optimal K in the two layers is around 5 and 10, respectively.
Running Time. The total cost of learning the dictionary using K-SVD is less than two hours. BOMP
is about 10x faster in the second layer in our default setting, which dominates the running time
of feature extraction. All experiments are run on a single 3.30GHz Intel Xeon CPU with a single
thread. The efficient feature-sign search algorithm [15] is used to solve the sparse coding problem with
an L1 penalty. We compare the running cost of different algorithms for a typical 300 × 300 image
in Table 2. HMP is much faster than single layer sparse coding and deconvolutional networks.
Figure 3: Left: Recognition accuracy as a function of zero-norm in the first layer. Right: Recognition accuracy as a function of zero-norm in the second layer.
Methods        | HMP(DCT) | HMP(K-SVD) | SIFT+SC | DN
Time (seconds) | 0.4      | 0.8        | 22.4    | 67.5

Table 2: Feature extraction time on a typical 300 × 300 image. HMP(DCT) means that the orthogonal
DCT dictionary is used in the first layer. HMP(K-SVD) means that the learned dictionary is used.
SIFT+SC denotes single layer sparse coding based on SIFT features.
Large Dictionary. We compared BTOMP and BOMP on a large dictionary with 10,000 filters in
the second layer. We found that BTOMP is about 5 times faster than BOMP when the number of
sub-groups is set to 1000. BTOMP and BOMP have almost the same accuracy (77.2%), higher
than the standard setting (1000 filters in the second layer).
Comparisons with State-of-the-art Approaches. We compare HMP with recent single feature
based approaches in Table 3. In the first two columns, we see that HMP performs much better than
other hierarchial feature learning approaches: invariant predictive sparse decomposition (IPSD) [12,
13], convolutional deep belief networks (CDBN) [16], and deconvolutional networks (DN) [26]. In
the middle two columns, we show that HMP outperforms single layer sparse coding approaches
on top of SIFT features: soft threshold coding (SIFT+T) [6], locality-constrained linear coding
(LLC) [23] and Macrofeatures based sparse coding [5], and hierarchical sparse coding [25]. Notice
that LLC is the best-performing approach in the first ImageNet Large-scale Visual Recognition
Challenge [23]. In the right two columns, we compare HMP with naive Bayesian nearest neighbor
(NBNN) [4] and three representative kernel methods: spatial pyramid matching (SPM) [14], metric
learning for CORR kernel (ML+CORR) [10], and gradient kernel descriptors (KDES-G) [3, 2]. This
group of approaches are based on SIFT features except for gradient kernel descriptors that extract
patch-level features using weighted sum match kernels. Hierarchical matching pursuit is more than
10% better than SPM, a widely accepted baseline, and slightly better than NBNN and KDES-G in
terms of accuracy. To our best knowledge, our feature learning system has the highest accuracy
among single feature based approaches. Slightly higher accuracy (around 80%) has been reported
with multiple kernel learning that combines many different types of image features [8].
HMP       76.8±0.4 | SIFT+T [6]        67.7     | SPM [14]     64.4
IPSD [12] 65.5     | HSC [25]          74.0     | ML+CORR [10] 69.6
CDBN [16] 65.4     | LLC [23]          73.4±0.5 | NBNN [4]     73.0
DN [26]   66.9±1.1 | Macrofeatures [5] 75.7±1.1 | KDES-G [2]   75.2±0.4

Table 3: Comparisons on Caltech-101. Hierarchical matching pursuit is compared to recently published object recognition algorithms.
3.2 Scene Recognition
We evaluate hierarchical matching pursuit for scene recognition on the MIT-Scene dataset [20].
This dataset contains 15,620 images from 67 indoor scene categories. All images have a minimum
resolution of 200 pixels in the smallest axis. This recognition task is very challenging given the large
in-class variability and small between-class variability in this dataset (see Figure 4). Following the
Figure 4: Sample images from the 67 indoor scene categories.
Methods  | HMP  | OB [18] | GIST [20] | ROI+GIST [20] | SIFT+SC
Accuracy | 41.8 | 37.6    | 22.0      | 26.0          | 36.9

Table 4: Comparisons on the MIT-Scene dataset. OB denotes the object bank approach proposed
in [18]. ROI denotes region of interest. SIFT+SC has similar performance to SIFT+OMP.
standard experimental setting [20], we train models on 80 images and test on 20 images per category.
We report the accuracy of HMP over the training/test split provided on the authors' website in
Table 4. HMP has an accuracy of 41.8% with filter size 4 × 4, more than 15 percent higher than
the GIST features based approach, and about 5 percent higher than SIFT based sparse coding and object
bank. Object bank is a recently proposed high-level feature, which trains 200 object detectors using
the object bounding boxes from the LabelMe and ImageNet dataset, and runs them across an image
at different scales to produce image features. To the best of our knowledge, this accuracy is beyond
all previously published results on this data.
3.3 Event Recognition
We evaluate hierarchical matching pursuit for static event recognition on the UIUC-Sports
dataset [18]. This dataset consists of 8 sport event categories: rowing, badminton, polo, bocce,
snowboarding, croquet, sailing and rock climbing with 137 to 250 images in each. Following the
common experimental setting [18], we train models on 70 images and test on 60 images per category. We report the averaged accuracy of HMP over 10 random training/test splits in Table 5. The
optimal filter size is 4 × 4. As we see, HMP significantly outperforms the SIFT based generative graphical model, SIFT based single layer sparse coding, and the recent object bank approach.
The accuracy obtained by HMP is the best published result on this dataset to date.
Methods  | HMP      | OB [18] | SIFT+GGM [17] | SIFT+SC
Accuracy | 85.7±1.3 | 76.3    | 73.4          | 82.7±1.5

Table 5: Comparisons on the UIUC-Sports dataset. GGM denotes the generative graphical model
proposed in [17].
4 Conclusion
We have proposed hierarchical matching pursuit to learn meaningful multi-level representations
from images layer by layer. Hierarchical matching pursuit uses the matching pursuit encoder to
build a feature hierarchy that consists of three modules: batch tree orthogonal matching pursuit,
spatial pyramid max pooling, and contrast normalization. Our system is scalable, and can efficiently
handle full-size images. In addition, we have proposed batch tree orthogonal matching pursuit to
speed up feature extraction at runtime. We have performed extensive comparisons on three types
of image classification tasks: object recognition, scene recognition, and event recognition. Our
experiments have confirmed that hierarchical matching pursuit outperforms both SIFT based single
layer sparse coding and other hierarchical feature learning approaches: convolutional deep belief
networks, convolutional neural networks and deconvolutional networks.
Acknowledgements. This work was funded in part by an Intel grant and by ONR MURI grants
N00014-07-1-0749 and N00014-09-1-1052.
References
[1] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, 2006.
[2] L. Bo, X. Ren, and D. Fox. Kernel Descriptors for Visual Recognition. In NIPS, 2010.
[3] L. Bo and C. Sminchisescu. Efficient Match Kernel between Sets of Features for Visual Recognition. In NIPS, 2009.
[4] O. Boiman, E. Shechtman, and M. Irani. In Defense of Nearest-Neighbor based Image Classification. In CVPR, 2008.
[5] Y. Boureau, F. Bach, Y. LeCun, and J. Ponce. Learning Mid-level Features for Recognition. In CVPR, 2010.
[6] A. Coates and A. Ng. The Importance of Encoding versus Training with Sparse Coding and Vector Quantization. In ICML, 2011.
[7] G. Davis, S. Mallat, and M. Avellaneda. Adaptive Greedy Approximations. Constructive Approximation, 13(1):57–98, 1997.
[8] P. Gehler and S. Nowozin. On Feature Combination for Multiclass Object Classification. In ICCV, 2009.
[9] G. Hinton, S. Osindero, and Y. Teh. A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 18(7):1527–1554, 2006.
[10] P. Jain, B. Kulis, and K. Grauman. Fast Image Search for Learned Metrics. In CVPR, 2008.
[11] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the Best Multi-Stage Architecture for Object Recognition? In ICCV, 2009.
[12] K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning Invariant Features through Topographic Filter Maps. In CVPR, 2009.
[13] K. Kavukcuoglu, P. Sermanet, Y. Boureau, K. Gregor, M. Mathieu, and Y. LeCun. Learning Convolutional Feature Hierarchies for Visual Recognition. In NIPS, 2010.
[14] S. Lazebnik, C. Schmid, and J. Ponce. Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories. In CVPR, 2006.
[15] H. Lee, A. Battle, R. Raina, and A. Ng. Efficient Sparse Coding Algorithms. In NIPS, 2007.
[16] H. Lee, R. Grosse, R. Ranganath, and A. Ng. Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations. In ICML, 2009.
[17] L. Li and L. Fei-Fei. What, Where and Who? Classifying Events by Scene and Object Recognition. In ICCV, 2007.
[18] L. Li, H. Su, E. Xing, and L. Fei-Fei. Object Bank: A High-Level Image Representation for Scene Classification and Semantic Feature Sparsification. In NIPS, 2010.
[19] Y. Pati, R. Rezaiifar, and P. Krishnaprasad. Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition. In The Twenty-Seventh Asilomar Conference on Signals, Systems and Computers, pages 40–44, 1993.
[20] A. Quattoni and A. Torralba. Recognizing Indoor Scenes. In CVPR, 2009.
[21] R. Rubinstein, A. Bruckstein, and M. Elad. Dictionaries for Sparse Representation Modeling. Proceedings of the IEEE, 98(6):4311–4322, 2010.
[22] R. Rubinstein, M. Zibulevsky, and M. Elad. Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit. Technical report, 2008.
[23] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Guo. Locality-constrained Linear Coding for Image Classification. In CVPR, 2010.
[24] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear Spatial Pyramid Matching using Sparse Coding for Image Classification. In CVPR, 2009.
[25] K. Yu, Y. Lin, and J. Lafferty. Learning Image Representations from the Pixel Level via Hierarchical Sparse Coding. In CVPR, 2011.
[26] M. Zeiler, D. Krishnan, G. Taylor, and R. Fergus. Deconvolutional Networks. In CVPR, 2010.
Learning to Learn with Compound HD Models
Ruslan Salakhutdinov
Department of Statistics, University of Toronto
[email protected]
Joshua B. Tenenbaum
Brain and Cognitive Sciences, MIT
[email protected]
Antonio Torralba
CSAIL, MIT
[email protected]
Abstract
We introduce HD (or "Hierarchical-Deep") models, a new compositional learning architecture that integrates deep learning models with structured hierarchical
Bayesian models. Specifically we show how we can learn a hierarchical Dirichlet
process (HDP) prior over the activities of the top-level features in a Deep Boltzmann Machine (DBM). This compound HDP-DBM model learns to learn novel
concepts from very few training examples, by learning low-level generic features,
high-level features that capture correlations among low-level features, and a category hierarchy for sharing priors over the high-level features that are typical of
different kinds of concepts. We present efficient learning and inference algorithms
for the HDP-DBM model and show that it is able to learn new concepts from very
few examples on CIFAR-100 object recognition, handwritten character recognition, and human motion capture datasets.
1 Introduction
"Learning to learn", or the ability to learn abstract representations that support transfer to novel
but related tasks, lies at the core of many problems in computer vision, natural language processing,
cognitive science, and machine learning. In typical applications of machine classification algorithms
today, learning curves are measured in tens, hundreds, or thousands of training examples. For human learners, however, just one or a few examples are often sufficient to grasp a new category and make
meaningful generalizations to novel instances [25, 16]. The architecture we describe here takes a
step towards this "one-shot learning" ability by learning several forms of abstract knowledge that
support transfer of useful representations from previously learned concepts to novel ones.
We call our architectures compound HD models, where "HD" stands for "Hierarchical-Deep", because they are derived by composing hierarchical nonparametric Bayesian models with deep networks, two influential approaches from the recent unsupervised learning literature with complementary strengths. Recently introduced deep learning models, including Deep Belief Networks [5],
Deep Boltzmann Machines [14], deep autoencoders [10], and others [12, 11], have been shown to
learn useful distributed feature representations for many high-dimensional datasets. The ability to
automatically learn in multiple layers allows deep models to construct sophisticated domain-specific
features without the need to rely on precise human-crafted input representations, increasingly important with the proliferation of data sets and application domains.
While the features learned by deep models can enable more rapid and accurate classification learning, deep networks themselves are not well suited to one-shot learning of novel classes. All units
and parameters at all levels of the network are engaged in representing any given input and are adjusted together during learning. In contrast, we argue that one-shot learning of new classes will be
easier in architectures that can explicitly identify only a small number of degrees of freedom (latent
variables and parameters) that are relevant to the new concept being learned, and thereby achieve
more appropriate and flexible transfer of learned representations to new tasks. This ability is the
hallmark of hierarchical Bayesian (HB) models, recently proposed in computer vision, statistics,
and cognitive science [7, 25, 4, 13] for learning to learn from few examples. Unlike deep networks,
these HB models explicitly represent category hierarchies that admit sharing the appropriate abstract knowledge about the new class?s parameters via a prior abstracted from related classes. HB
approaches, however, have complementary weaknesses relative to deep networks. They typically
rely on domain-specific hand-crafted features [4, 1] (e.g. GIST, SIFT features in computer vision,
MFCC features in speech perception domains). Committing to a priori defined feature representations, instead of learning them from data, can be detrimental. Moreover, many HB approaches
often assume a fixed hierarchy for sharing parameters [17, 3] instead of learning the hierarchy in an
unsupervised fashion.
In this work we investigate compound HD (hierarchical-deep) architectures that integrate these deep
models with structured hierarchical Bayesian models. In particular, we show how we can learn a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a Deep Boltzmann Machine (DBM), coming to represent both a layered hierarchy of increasingly abstract features, and a tree-structured hierarchy of classes. Our model depends minimally on domain-specific
representations and achieves state-of-the-art one-shot learning performance by unsupervised discovery of three components: (a) low-level features that abstract from the raw high-dimensional sensory
input (e.g. pixels, or 3D joint angles); (b) high-level part-like features that express the distinctive
perceptual structure of a specific class, in terms of class-specific correlations over low-level features; and (c) a hierarchy of super-classes for sharing abstract knowledge among related classes. We
evaluate the compound HDP-DBM model on three different perceptual domains. We also illustrate
the advantages of having a full generative model, extending from highly abstract concepts all the
way down to sensory inputs: we can not only generalize class labels but also synthesize new examples in novel classes that look reasonably natural, and we can significantly improve classification
performance by learning parameters at all levels jointly by maximizing a joint log-probability score.
2 Deep Boltzmann Machines (DBMs)
A Deep Boltzmann Machine is a network of symmetrically coupled stochastic binary units. It contains a set of visible units v \in \{0,1\}^D, and a sequence of layers of hidden units h^1 \in \{0,1\}^{F_1}, h^2 \in \{0,1\}^{F_2}, ..., h^L \in \{0,1\}^{F_L}. There are connections only between hidden units in adjacent layers, as well as between visible and hidden units in the first hidden layer. Consider a DBM with three hidden layers (i.e. L = 3; see footnote 1). The probability of a visible input v is:
P(v; \theta) = \frac{1}{Z(\theta)} \sum_{h} \exp\Big( \sum_{ij} W^{(1)}_{ij} v_i h^1_j + \sum_{jl} W^{(2)}_{jl} h^1_j h^2_l + \sum_{lm} W^{(3)}_{lm} h^2_l h^3_m \Big),   (1)

where h = \{h^1, h^2, h^3\} are the hidden units, and \theta = \{W^{(1)}, W^{(2)}, W^{(3)}\} are the model parameters, representing visible-to-hidden and hidden-to-hidden symmetric interaction terms.
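To make the roles of the parameters concrete, here is a minimal sketch (ours, not from the paper) of the exponent inside Eq. 1 for a single joint configuration; the intractable part is the normalizer Z(\theta), which sums the exponential of this quantity over all binary states.

import numpy as np

def neg_energy(v, h1, h2, h3, W1, W2, W3):
    # Exponent of Eq. 1 for one configuration (v, h1, h2, h3); assumed
    # shapes: W1: (D, F1), W2: (F1, F2), W3: (F2, F3).
    return v @ W1 @ h1 + h1 @ W2 @ h2 + h2 @ W3 @ h3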
Approximate Learning: Exact maximum likelihood learning in this model is intractable, but efficient approximate learning of DBMs can be carried out by using mean-field inference to estimate data-dependent expectations, and an MCMC-based stochastic approximation procedure to approximate the model's expected sufficient statistics [14]. In particular, consider approximating the true posterior P(h|v; \theta) with a fully factorized approximating distribution over the three sets of hidden units: Q(h|v; \mu) = \prod_{j=1}^{F_1} \prod_{k=1}^{F_2} \prod_{m=1}^{F_3} q(h^1_j|v)\, q(h^2_k|v)\, q(h^3_m|v), where \mu = \{\mu^1, \mu^2, \mu^3\} are the mean-field parameters with q(h^l_i = 1) = \mu^l_i for l = 1, 2, 3. In this case, we can write down the variational lower bound on the log-probability of the data, which takes a particularly simple form:

\log P(v; \theta) \geq v^\top W^{(1)} \mu^1 + (\mu^1)^\top W^{(2)} \mu^2 + (\mu^2)^\top W^{(3)} \mu^3 - \log Z(\theta) + H(Q),   (2)

where H(\cdot) is the entropy functional. Learning proceeds by finding the value of \mu that maximizes this lower bound for the current value of model parameters \theta, which results in a set of mean-field fixed-point equations. Given the variational parameters \mu, the model parameters \theta are then updated to maximize the variational bound using stochastic approximation (for details see [14, 22, 26]).
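As an illustration, the mean-field fixed-point equations can be implemented as a simple iteration. The sketch below is our own NumPy rendering under assumed weight shapes (W1: D x F1, W2: F1 x F2, W3: F2 x F3), not the authors' code:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field(v, W1, W2, W3, n_iters=50):
    """Fixed-point iteration for (mu1, mu2, mu3) given a visible vector v."""
    F2, F3 = W2.shape[1], W3.shape[1]
    mu2 = np.full(F2, 0.5)
    mu3 = np.full(F3, 0.5)
    for _ in range(n_iters):
        # Each hidden layer receives input from both adjacent layers.
        mu1 = sigmoid(v @ W1 + mu2 @ W2.T)
        mu2 = sigmoid(mu1 @ W2 + mu3 @ W3.T)
        mu3 = sigmoid(mu2 @ W3)
    return mu1, mu2, mu3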
Multinomial DBMs: To allow DBMs to express more information and introduce more structured hierarchical priors, we will use a conditional multinomial distribution to model the activities of the top-level units. Specifically, we will use M softmax units, each with a "1-of-K" encoding (so that each unit contains a set of K weights). All M separate softmax units will share the same set of weights, connecting them to binary hidden units at the lower level (Fig. 1). A key observation is that M separate copies of softmax units that all share the same set of weights can be viewed as a single multinomial unit that is sampled M times [15, 19] (a numerical sanity check follows Fig. 1 below). A pleasing property of using softmax units is that the mathematics underlying the learning algorithm for binary-binary DBMs remains the same.

[Footnote 1: For clarity, we use three hidden layers. Extensions to models with more than three layers are trivial.]
[Figure 1: Left: the multinomial DBM model; the top layer represents M softmax hidden units h^3, which share the same set of weights. Middle: a different interpretation; the M softmax units are replaced by a single multinomial unit which is sampled M times. Right: the hierarchical Dirichlet process prior over the states of h^3, together with a learned hierarchy of super-classes (e.g. "vehicle": van, truck, car; "animal": cow, horse).]
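The equivalence mentioned above is easy to check numerically: M tied softmax units and a single multinomial unit sampled M times induce the same distribution over word counts. A tiny sanity check (toy sizes, our own illustration) is:

import numpy as np

rng = np.random.default_rng(0)
K, M = 5, 10                            # K states, M replicated units
logits = rng.normal(size=K)             # shared weights give shared logits
p = np.exp(logits) / np.exp(logits).sum()

# M independent softmax draws, accumulated into a count vector ...
counts_softmax = np.bincount(rng.choice(K, size=M, p=p), minlength=K)
# ... have the same law as one multinomial unit sampled M times.
counts_multinomial = rng.multinomial(M, p)
assert counts_softmax.sum() == counts_multinomial.sum() == M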
3 Compound HDP-DBM model
After a DBM model has been learned, we have an undirected model that defines the joint distribution P(v, h^1, h^2, h^3). One way to express what has been learned is the conditional model P(v, h^1, h^2 | h^3) and a prior term P(h^3). We can therefore rewrite the variational bound as:
\log P(v) \geq \sum_{h^1, h^2, h^3} Q(h|v; \mu) \log P(v, h^1, h^2 | h^3) + H(Q) + \sum_{h^3} Q(h^3|v; \mu) \log P(h^3).   (3)
This particular decomposition lies at the core of the greedy recursive pretraining algorithm: we keep the learned conditional model P(v, h^1, h^2 | h^3), but maximize the variational lower bound of Eq. 3 with respect to the last term [5]. Instead of adding an additional undirected layer (e.g. a restricted Boltzmann machine) to model P(h^3), we can place a hierarchical Dirichlet process prior over h^3, which will allow us to learn category hierarchies and, more importantly, useful representations of classes that contain few training examples. The part we keep, P(v, h^1, h^2 | h^3), represents a conditional DBM model, which can be viewed as a two-layer DBM but with bias terms given by the states of h^3:
P(v, h^1, h^2 | h^3) = \frac{1}{Z(\theta, h^3)} \exp\Big( \sum_{ij} W^{(1)}_{ij} v_i h^1_j + \sum_{jl} W^{(2)}_{jl} h^1_j h^2_l + \sum_{lm} W^{(3)}_{lm} h^2_l h^3_m \Big).   (4)

3.1 A Hierarchical Bayesian Topic Prior
In a typical hierarchical topic model, we observe a set of N documents, each of which is modeled as a mixture over topics that are shared among documents. Let there be K words in the vocabulary. A topic t is a discrete distribution over K words with probability vector \phi_t. Each document n has its own distribution over topics given by probabilities \theta_n.

In our compound HDP-DBM model, we will use a hierarchical topic model as a prior over the activities of the DBM's top-level features. Specifically, the term "document" will refer to the top-level multinomial unit h^3, and the M "words" in the document will represent the M samples, or active DBM top-level features, generated by this multinomial unit. Words in each document are drawn by choosing a topic t with probability \theta_{nt}, and then choosing a word w with probability \phi_{tw}. We will often refer to topics as our learned higher-level features, each of which defines a topic-specific distribution over DBM's h^3 features. Let h^3_{in} be the i-th word in document n, and x_{in} be its topic:
\theta_n \mid \pi \sim \mathrm{Dir}(\alpha\pi), \quad \phi_t \mid \tau \sim \mathrm{Dir}(\beta\tau), \quad x_{in} \mid \theta_n \sim \mathrm{Mult}(\theta_n), \quad h^3_{in} \mid x_{in}, \phi_{x_{in}} \sim \mathrm{Mult}(\phi_{x_{in}}),   (5)

where \pi is the global distribution over topics, \tau is the global distribution over K words, and \alpha and \beta are concentration parameters.
Let us further assume that we are presented with a fixed two-level category hierarchy. Suppose that the N documents, or objects, are partitioned into C basic-level categories (e.g. cow, sheep, car). We represent such a partition by a vector z^b of length N, each entry of which is z^b_n \in \{1, ..., C\}. We also assume that our C basic-level categories are partitioned into S super-categories (e.g. animal, vehicle), represented by a vector z^s of length C, with z^s_c \in \{1, ..., S\}. These partitions define a fixed two-level tree hierarchy (see Fig. 1). We will relax this assumption later.

The hierarchical topic model can be readily extended to modeling the above hierarchy. For each document n that belongs to basic category c, we place a common Dirichlet prior over \theta_n with parameters \pi^{(1)}_c. The Dirichlet parameters \pi^{(1)} are themselves drawn from a Dirichlet prior with parameters \pi^{(2)}, and so on (see Fig. 1). Specifically, we define the following prior over h^3:
\pi^{(2)}_s \mid \pi^{(3)}_g \sim \mathrm{Dir}(\alpha^{(3)} \pi^{(3)}_g),   for each super-category s = 1, ..., S
\pi^{(1)}_c \mid \pi^{(2)}_{z^s_c} \sim \mathrm{Dir}(\alpha^{(2)} \pi^{(2)}_{z^s_c}),   for each basic category c = 1, ..., C
\theta_n \mid \pi^{(1)}_{z^b_n} \sim \mathrm{Dir}(\alpha^{(1)} \pi^{(1)}_{z^b_n}),   for each document n = 1, ..., N
x_{in} \mid \theta_n \sim \mathrm{Mult}(\theta_n),   for each word i = 1, ..., M
\phi_t \mid \tau \sim \mathrm{Dir}(\beta\tau),
h^3_{in} \mid x_{in}, \phi_{x_{in}} \sim \mathrm{Mult}(\phi_{x_{in}}),   (6)
where \pi^{(3)}_g is the global distribution over topics, \pi^{(2)}_s is the super-category-specific and \pi^{(1)}_c is the class-specific distribution over topics, or higher-level features. These high-level features, in turn, define topic-specific distributions over h^3 features, or "words" in the DBM model.

For a fixed number of topics T, the above model represents a hierarchical extension of LDA. We typically do not know the number of topics a priori. It is therefore natural to consider a nonparametric extension based on the HDP model [21], which allows for a countably infinite number of topics.
In the standard hierarchical Dirichlet process notation, we have
G^{(3)}_g \sim \mathrm{DP}(\gamma, \mathrm{Dir}(\beta\tau)), \quad G^{(2)}_s \sim \mathrm{DP}(\alpha^{(3)}, G^{(3)}_g), \quad G^{(1)}_c \sim \mathrm{DP}(\alpha^{(2)}, G^{(2)}_{z^s_c}),
G_n \sim \mathrm{DP}(\alpha^{(1)}, G^{(1)}_{z^b_n}),   (7)
\bar\phi_{in} \mid G_n \sim G_n, \quad h^3_{in} \mid \bar\phi_{in} \sim \mathrm{Mult}(\bar\phi_{in}),
where \mathrm{Dir}(\beta\tau) is the base distribution, and each \bar\phi_{in} is a factor associated with a single observation h^3_{in}. Making use of topic index variables x_{in}, we denote \bar\phi_{in} = \phi_{x_{in}} (see Eq. 6). Using a stick-breaking representation we can write

G^{(3)}_g(\phi) = \sum_{t=1}^{\infty} \pi^{(3)}_{gt} \delta_{\phi_t}, \quad G^{(2)}_s(\phi) = \sum_{t=1}^{\infty} \pi^{(2)}_{st} \delta_{\phi_t}, \quad G^{(1)}_c(\phi) = \sum_{t=1}^{\infty} \pi^{(1)}_{ct} \delta_{\phi_t}, \quad G_n(\phi) = \sum_{t=1}^{\infty} \theta_{nt} \delta_{\phi_t},

which represent sums of point masses. We also place Gamma priors over the concentration parameters as in [21].
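For intuition, a truncated stick-breaking draw of such weights can be sketched as follows (our own illustration; the samplers in Section 4 work with these weights rather than a fixed truncation):

import numpy as np

def stick_breaking(alpha, truncation, rng):
    # pi_t = beta_t * prod_{s<t} (1 - beta_s), with beta_t ~ Beta(1, alpha).
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

rng = np.random.default_rng(0)
pi = stick_breaking(alpha=2.0, truncation=20, rng=rng)  # sums to just under 1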
The overall generative model is shown in Fig. 1. To generate a sample, we first draw M words, or activations of the top-level features, from the HDP prior over h^3 given by Eq. 7. Conditioned on h^3, we sample the states of v from the conditional DBM model given by Eq. 4.
3.2 Modeling the number of super-categories
So far we have assumed that our model is presented with a two-level partition z = \{z^s, z^b\}. If, however, we are not given any level-1 or level-2 category labels, we need to infer the distribution over the possible category structures. We place a nonparametric two-level nested Chinese restaurant process (CRP) prior [2] over z, which defines a prior over tree structures and is flexible enough to learn arbitrary hierarchies. The main building block of the nested CRP is the Chinese restaurant process, a distribution on partitions of integers. Imagine a process by which customers enter a restaurant with an unbounded number of tables, where the n-th customer occupies a table k drawn from

P(z_n = k \mid z_1, ..., z_{n-1}) = \frac{n_k}{n-1+\gamma} if n_k > 0, and \frac{\gamma}{n-1+\gamma} if k is new,   (8)

where n_k is the number of previous customers at table k and \gamma is the concentration parameter. The nested CRP, nCRP(\gamma), extends the CRP to a nested sequence of partitions, one for each level of the tree. In this case each observation n is first assigned to a super-category z^s_n using Eq. 8. Its assignment to a basic-level category z^b_n, which is placed under the super-category z^s_n, is again recursively drawn from Eq. 8. We also place a Gamma(1, 1) prior over \gamma. The proposed model thus allows for both a nonparametric prior over a potentially unbounded number of global topics, or higher-level features, and a nonparametric prior that allows learning an arbitrary tree taxonomy. A small simulation of the CRP draw in Eq. 8 is sketched below.
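The CRP conditional in Eq. 8 is straightforward to simulate; the following small sketch (ours, with illustrative names) draws the table for the next customer given the current table counts:

import numpy as np

def crp_assign(table_counts, gamma, rng):
    """Sample a table for the next customer; index len(table_counts) = new table."""
    n = sum(table_counts) + 1               # index of the incoming customer
    probs = np.array(table_counts + [gamma], dtype=float)
    probs /= (n - 1 + gamma)                # Eq. 8, including the "new table" mass
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
counts = [3, 1]                             # two existing tables
k = crp_assign(counts, gamma=1.0, rng=rng)  # k == 2 would mean a new table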
4 Inference
Inferences about model parameters at all levels of hierarchy can be performed by MCMC. When the
tree structure z of the model is not given, the inference process will alternate between fixing z while
sampling the space of model parameters, and vice versa.
Sampling HDP parameters: Given the category assignment vectors z and the states of the top-level DBM features h^3, we use the posterior representation sampler of [20]. In particular, the HDP sampler maintains the stick-breaking weights \{\theta_n\}_{n=1}^N and \{\pi^{(1)}_c, \pi^{(2)}_s, \pi^{(3)}_g\}, and the topic indicator variables x (the parameters \phi can be integrated out). The sampler alternates between: (a) sampling cluster indices x_{in} using Gibbs updates in the Chinese restaurant franchise (CRF) representation of the HDP; (b) sampling the weights at all three levels conditioned on x using the usual posterior of a DP (see footnote 2).
Sampling category assignments z: Given the current instantiation of the stick-breaking weights, by a defining property of a DP, for each input n we have

(\theta_{1,n}, ..., \theta_{T,n}, \theta_{\mathrm{new},n}) \sim \mathrm{Dir}\big(\alpha^{(1)} \pi^{(1)}_{z_n,1}, ..., \alpha^{(1)} \pi^{(1)}_{z_n,T}, \alpha^{(1)} \pi^{(1)}_{z_n,\mathrm{new}}\big).   (9)

Combining the above likelihood term with the CRP prior (Eq. 8), the posterior over the category assignment can be calculated as follows:

p(z_n \mid \theta_n, z_{-n}, \pi^{(1)}) \propto p(\theta_n \mid \pi^{(1)}, z_n)\, p(z_n \mid z_{-n}),   (10)

where z_{-n} denotes the variables z for all observations other than n. When computing the probability of placing \theta_n under a newly created category, its parameters are sampled from the prior.
Sampling DBM's hidden units: Given the states of the DBM's top-level multinomial unit h^3, conditional samples from P(h^1_n, h^2_n | h^3_n, v_n) can be obtained by running a Gibbs sampler that alternates between sampling the states of h^1_n independently given h^2_n, and vice versa. Conditioned on the topic assignments x_{in} and h^2_n, the states of the multinomial unit h^3_n for each input n are sampled using the Gibbs conditionals

P(h^3_{in} \mid h^2_n, h^3_{-in}, x_n) \propto P(h^2_n \mid h^3_n)\, P(h^3_{in} \mid x_{in}),   (11)

where the first term is given by a product of logistic functions (see Eq. 4):

P(h^2 \mid h^3) = \prod_l P(h^2_l \mid h^3), \quad with \quad P(h^2_l = 1 \mid h^3) = \frac{1}{1 + \exp\big(-\sum_m W^{(3)}_{lm} h^3_m\big)},   (12)

and the second term P(h^3_{in}) is given by the multinomial \mathrm{Mult}(\phi_{x_{in}}) (see Eq. 7; in our conjugate setting, the parameters \phi can be further integrated out).
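For example, the logistic conditional in Eq. 12 gives a one-line Gibbs update for h^2 given the current top-level state; the sketch below is our own and assumes h^3 is summarized by a length-K count vector and W3 has shape (F2, K):

import numpy as np

def sample_h2_given_h3(h3_counts, W3, rng):
    # Eq. 12: P(h2_l = 1 | h3) = 1 / (1 + exp(-sum_m W3[l, m] * h3_m))
    p = 1.0 / (1.0 + np.exp(-(W3 @ h3_counts)))
    return (rng.random(p.shape) < p).astype(int)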
Fine-tuning DBM: More importantly, conditioned on h^3, we can further fine-tune the low-level DBM parameters \theta = \{W^{(1)}, W^{(2)}, W^{(3)}\} by applying approximate maximum likelihood learning (see Section 2) to the conditional DBM model of Eq. 4. For the stochastic approximation algorithm, as the partition function depends on the states of h^3, we maintain one "persistent" Markov chain per data point (for details see [22, 14]).
Making predictions: Given a test input v_t, we can quickly infer the approximate posterior over h^3_t using the mean-field of Eq. 2, followed by running the full Gibbs sampler to get approximate samples from the posterior over the category assignments. In practice, for faster inference, we fix the learned topics \phi_t and approximate the marginal likelihood that h^3_t belongs to category z_t by assuming that the document-specific DP G_t can be well approximated by the class-specific DP G^{(1)}_{z_t} (see Fig. 1 and footnote 3):

P(h^3_t \mid z_t, G^{(1)}, \phi) = \int P(h^3_t \mid \phi, G_t)\, P(G_t \mid G^{(1)}_{z_t})\, dG_t \approx P(h^3_t \mid \phi, G^{(1)}_{z_t}).   (13)

Combining this likelihood term with the nCRP prior P(z_t | z_{-t}) (Eq. 8) allows us to efficiently infer an approximate posterior over the category assignments (see footnote 4).
[Footnote 2: Conditioned on the draw of the super-class DP G^{(2)}_s and the state of the CRF, the posteriors over G^{(1)}_c become independent. We can easily speed up inference by sampling from these conditionals in parallel.]
[Footnote 3: We note that G^{(1)}_{z_t} = E[G_t | G^{(1)}_{z_t}].]
[Footnote 4: In all of our experimental results, computing this approximate posterior takes a fraction of a second.]
[Figure 2: A random subset of the 1st and 2nd layer DBM features, and higher-level class-sensitive HDP features/topics.]

[Figure 3: A typical partition of the 100 basic-level categories into learned super-categories:
1. bed, chair, clock, couch, dinosaur, lawn mower, table, telephone, television, wardrobe
2. bus, house, pickup truck, streetcar, tank, tractor, train
3. crocodile, kangaroo, lizard, snake, spider, squirrel
4. hamster, mouse, rabbit, raccoon, possum, bear
5. apple, orange, pear, sunflower, sweet pepper
6. baby, boy, girl, man, woman
7. dolphin, ray, shark, turtle, whale
8. otter, porcupine, shrew, skunk
9. beaver, camel, cattle, chimpanzee, elephant
10. fox, leopard, lion, tiger, wolf
11. maple tree, oak tree, pine tree, willow tree
12. flatfish, seal, trout, worm
13. butterfly, caterpillar, snail
14. bee, crab, lobster
15. bridge, castle, road, skyscraper
16. bicycle, keyboard, motorcycle, orchid, palm tree
17. bottle, bowl, can, cup, lamp
18. cloud, plate, rocket
19. mountain, plain, sea
20. poppy, rose, tulip
21. aquarium fish, mushroom
22. beetle, cockroach
23. forest]
5 Experiments
We present experimental results on the CIFAR-100 [8], handwritten character [9], and human motion
capture recognition datasets. For all datasets, we first pretrain a DBM model in unsupervised fashion
on raw sensory input (e.g. pixels, or 3D joint angles), followed by fitting an HDP prior, which is run
for 200 Gibbs sweeps. We further run 200 additional Gibbs steps in order to fine-tune parameters of
the entire compound HDP-DBM model. This was sufficient to reach convergence and obtain good
performance. Across all datasets, we also assume that the basic-level category labels are given,
but no super-category labels are available. The training set includes many examples of familiar
categories but only a few examples of a novel class. Our goal is to generalize well on a novel class.
In all experiments we compare performance of HDP-DBM to the following alternative models:
stand-alone Deep Boltzmann Machines, Deep Belief Networks [5], a "Flat HDP-DBM" model, which always uses a single super-category, SVMs, and k-NN. The Flat HDP-DBM approach could potentially identify a set of useful high-level features common to all categories. Finally, to evaluate
performance of DBMs (and DBNs), we follow [14]. Note that using HDPs on top of raw sensory input (i.e. pixels, or even image-specific GIST features) performs far worse compared to HDP-DBM.
5.1 CIFAR-100 dataset
The CIFAR-100 image dataset [8] contains 50,000 training and 10,000 test images of 100 object
categories (100 per class), with 32 x 32 x 3 RGB pixels. Extreme variability in scale, viewpoint, illumination, and cluttered background makes the object recognition task for this dataset quite difficult. Similar to [8], in order to learn good generic low-level features, we first train a two-layer DBM in completely unsupervised fashion using 4 million tiny images [23] (see footnote 5). We use a conditional Gaussian distribution to model the observed pixel values [8, 6]. The first DBM layer contained 10,000 binary hidden units, and the second layer contained M = 1000 softmax units, each defining a distribution over 10,000 second-layer features (see footnote 6). We then fit an HDP prior over h^2 to the 100 object classes.
Fig. 2 displays a random subset of the 1st and 2nd layer DBM features, as well as higher-level class-sensitive features, or topics, learned by the HDP model. To visualize a particular higher-level feature, we first sample M words from a fixed topic \phi_t, followed by sampling RGB pixel values from the
conditional DBM model. While DBM features capture mostly low-level structure, including edges
and corners, the HDP features tend to capture higher-level structure, including contours, shapes,
color components, and surface boundaries. More importantly, features at all levels of the hierarchy
evolve without incorporating any image-specific priors. Fig. 3 shows a typical partition over 100
classes that our model learns with many super-categories containing semantically similar classes.
We next illustrate the ability of the HDP-DBM to generalize from a single training example of a
"pear" class. We trained the model on 99 classes containing 500 training images each, but only one training example of the "pear" class. Fig. 4 shows the kind of transfer our model is performing: the
model discovers that pears are like apples and oranges, and not like other classes of images, such as
dolphins, that reside in very different parts of the hierarchy. Hence the novel category can inherit
[Footnote 5: The dataset contains random images of natural scenes downloaded from the web.]
[Footnote 6: We also experimented with a 3-layer DBM model, as well as various softmax parameters: M = 500 and M = 2000. The difference in performance was not significant.]
the prior distribution over similar high-level shape and color features, allowing the HDP-DBM to generalize considerably better to new instances of the "pear" class.

[Figure 4: Left: Training examples along with the eight most probable topics \phi_t, ordered by hand. Right: Performance of HDP-DBM, DBM, and SVMs for all object classes when learning with 3 examples; object categories are sorted by their performance.]

                 CIFAR Dataset                 Handwritten Characters   Motion Capture
                 number of examples            number of examples       number of examples
Model            1     3     5     10    50    1     3     5     10     1     3     5     10    50
Tuned HDP-DBM    0.36  0.41  0.46  0.53  0.62  0.67  0.78  0.87  0.93   0.67  0.84  0.90  0.93  0.96
HDP-DBM          0.34  0.39  0.45  0.52  0.61  0.65  0.76  0.85  0.92   0.66  0.82  0.88  0.93  0.96
Flat HDP-DBM     0.27  0.37  0.42  0.50  0.61  0.58  0.73  0.82  0.89   0.63  0.79  0.86  0.91  0.96
DBM              0.26  0.36  0.41  0.48  0.61  0.57  0.72  0.81  0.89   0.61  0.79  0.85  0.91  0.95
DBN              0.25  0.33  0.37  0.45  0.60  0.51  0.72  0.81  0.89   0.61  0.79  0.84  0.92  0.96
SVM              0.18  0.27  0.31  0.38  0.61  0.41  0.66  0.77  0.86   0.54  0.78  0.84  0.91  0.96
1-NN             0.17  0.18  0.19  0.20  0.32  0.43  0.65  0.73  0.81   0.58  0.75  0.81  0.88  0.93
GIST             0.27  0.31  0.33  0.39  0.58  -     -     -     -      -     -     -     -     -

Table 1: Classification performance on the test set using 2*AUROC-1. The results in bold in the original correspond to ROCs that are statistically indistinguishable from the best (the difference is not statistically significant).
Table 1 quantifies performance using the area under the ROC curve (AUROC) for classifying 10,000
test images as belonging to the novel vs. all other 99 classes (we report 2*AUROC-1, so zero corresponds to the classifier that makes random predictions). The results are averaged over 100 classes
using a "leave-one-out" test format. Based on a single example, the HDP-DBM model achieves an
AUROC of 0.36, significantly outperforming DBMs, DBNs, SVMs, as well as 1-NN using standard
image-specific GIST features [24] that achieve an AUROC of 0.26, 0.25, 0.18 and 0.27 respectively.
Table 1 also shows that fine-tuning parameters of all layers jointly as well as learning super-category
hierarchy significantly improves model performance. As the number of training examples increases,
the HDP-DBM model still consistently outperforms alternative methods. Fig. 4 further displays performance of HDP-DBM, DBM, and SVM models for all object categories when learning with only
three examples. Observe that over 40 classes benefit in various degrees from learning a hierarchy.
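For reference, the 2*AUROC-1 statistic reported throughout Table 1 can be computed directly from classifier scores. The helper below is our own illustration (a rank-based AUROC with ties split), not the authors' evaluation code:

import numpy as np

def two_auroc_minus_one(pos_scores, neg_scores):
    # AUROC = P(score of a random positive > score of a random negative),
    # so a value of 0 below corresponds to random guessing, 1 to perfect ranking.
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    auroc = np.mean((pos > neg) + 0.5 * (pos == neg))
    return 2.0 * auroc - 1.0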
5.2 Handwritten Characters
The handwritten characters dataset [9] can be viewed as the "transpose" of MNIST. Instead of containing 60,000 images of 10 digit classes, the dataset contains 30,000 images of 1500 characters (20 examples each) with 28 x 28 pixels. These characters are from 50 alphabets from around the world,
including Bengali, Cyrillic, Arabic, Sanskrit, Tagalog (see Fig. 5). We split the dataset into 15,000
training and 15,000 test images (10 examples of each class). Similar to the CIFAR dataset, we pretrain a two-layer DBM model, with the first layer containing 1000 hidden units, and the second layer
containing M=100 softmax units, each defining a distribution over 1000 second layer features.
Fig. 5 displays a random subset of training images, along with the 1st and 2nd layer DBM features, as well as higher-level class-sensitive HDP features. The HDP features tend to capture higher-level parts, many of which resemble pen "strokes". Table 1 further shows results for classifying 15,000
test images as belonging to the novel vs. all other 1,499 character classes. The HDP-DBM model
significantly outperforms other methods, particularly when learning characters with few training
examples. Fig. 6 further displays learned super-classes along with examples of entirely novel characters that have been generated by the model for the same super-class, as well as conditional samples
when learning with only three training examples (we note that using Deep Belief Networks instead of DBMs produced far inferior generative samples). Remarkably, many samples look realistic, containing coherent, long-range structure, while at the same time being different from existing training images (see the Supplementary Materials for a much richer class of generated samples).

[Figure 5: A random subset of the training images along with 1st and 2nd layer DBM features, as well as higher-level class-sensitive HDP features/topics.]

[Figure 6: Left: Learned super-classes along with examples of novel characters generated by the model for the same super-class. Right: Three training examples along with 8 conditional samples.]
5.3 Motion capture
We next applied our model to human motion capture data consisting of sequences of 3D joint angles
plus body orientation and translation [18]. The dataset contains 10 walking styles, including normal,
drunk, graceful, gangly, sexy, dinosaur, chicken, old person, cat, and strong. There are 2500 frames
of each style at 60fps, where each time step was represented by a vector of 58 real-valued numbers.
The dataset was split at random into 1500 training and 1000 test frames of each style. We further
preprocessed the data by treating each window of 10 consecutive frames as a single 58 x 10 = 580-dimensional data vector. For the two-layer DBM model, the first layer contained 500 hidden units, with the
second layer containing M =50 softmax units, each defining a distribution over 500 second layer
features. As expected, Table 1 shows that the HDP-DBM model performs much better than the other models when discriminating between the nine existing walking styles and a novel walking style. The difference is particularly large in the regime where we observe only a handful of training examples of a novel walking style.
6 Conclusions
We developed a compositional architecture that learns an HDP prior over the activities of top-level
features of the DBM model. The resulting compound HDP-DBM model is able to learn low-level
features from raw sensory input, high-level features, as well as a category hierarchy for parameter
sharing. Our experimental results show that the proposed model can acquire new concepts from
very few examples in a diverse set of application domains. The compositional model considered in
this paper was directly inspired by the architecture of the DBM and HDP, but it need not be. Indeed,
any other deep learning module, including Deep Belief Networks, sparse auto-encoders, or any
other hierarchical Bayesian model can be adapted. This perspective opens a space of compositional
models that may be more suitable for capturing the human-like ability to learn from few examples.
Acknowledgments: This research was supported by NSERC, ONR (MURI Grant 1015GNA126),
ONR N00014-07-1-0937, ARO W911NF-08-1-0242, and Qualcomm.
References
[1] E. Bart, I. Porteous, P. Perona, and M. Welling. Unsupervised learning of visual taxonomies. In CVPR, pages 1-8, 2008.
[2] David M. Blei, Thomas L. Griffiths, and Michael I. Jordan. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. J. ACM, 57(2), 2010.
[3] Kevin R. Canini and Thomas L. Griffiths. Modeling human transfer learning with the hierarchical Dirichlet process. In NIPS 2009 Workshop: Nonparametric Bayes, 2009.
[4] Li Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Trans. Pattern Analysis and Machine Intelligence, 28(4):594-611, April 2006.
[5] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[6] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[7] C. Kemp, A. Perfors, and J. Tenenbaum. Learning overhypotheses with hierarchical Bayesian models. Developmental Science, 10(3):307-321, 2006.
[8] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Dept. of Computer Science, University of Toronto, 2009.
[9] Brenden Lake, Ruslan Salakhutdinov, Jason Gross, and Josh Tenenbaum. One-shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 2011.
[10] H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring strategies for training deep neural networks. Journal of Machine Learning Research, 10:1-40, 2009.
[11] Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th International Conference on Machine Learning, pages 609-616, 2009.
[12] M. A. Ranzato, Y. Boureau, and Y. LeCun. Sparse feature learning for deep belief networks. Advances in Neural Information Processing Systems, 2008.
[13] A. Rodriguez, D. Dunson, and A. Gelfand. The nested Dirichlet process. Journal of the American Statistical Association, 103:1131-1144, 2008.
[14] R. R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 12, 2009.
[15] R. R. Salakhutdinov and G. E. Hinton. Replicated softmax: an undirected topic model. In Advances in Neural Information Processing Systems, volume 22, 2010.
[16] L. B. Smith, S. S. Jones, B. Landau, L. Gershkoff-Stowe, and L. Samuelson. Object name learning provides on-the-job training for attention. Psychological Science, pages 13-19, 2002.
[17] E. B. Sudderth, A. Torralba, W. T. Freeman, and A. S. Willsky. Describing visual scenes using transformed objects and parts. International Journal of Computer Vision, 77(1-3):291-330, 2008.
[18] G. Taylor, G. E. Hinton, and S. T. Roweis. Modeling human motion using binary latent variables. In Advances in Neural Information Processing Systems. MIT Press, 2006.
[19] Y. W. Teh and G. E. Hinton. Rate-coded restricted Boltzmann machines for face recognition. In Advances in Neural Information Processing Systems, volume 13, 2001.
[20] Y. W. Teh and M. I. Jordan. Hierarchical Bayesian nonparametric models with applications. In Bayesian Nonparametrics: Principles and Practice. Cambridge University Press, 2010.
[21] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[22] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML. ACM, 2008.
[23] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: a large dataset for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958-1970, 2008.
[24] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[25] Fei Xu and Joshua B. Tenenbaum. Word learning as Bayesian inference. Psychological Review, 114(2), 2007.
[26] L. Younes. On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates, March 17, 2000.
Stochastic convex optimization with bandit
feedback
Alekh Agarwal
Department of EECS
UC Berkeley
[email protected]
Dean P. Foster
Department of Statistics
University of Pennysylvania
[email protected]
Sham M. Kakade
Department of Statistics Microsoft Research
University of Pennysylvania
New England
[email protected]
Daniel Hsu
Microsoft Research
New England
[email protected]
Alexander Rakhlin
Department of Statistics
University of Pennysylvania
[email protected]
Abstract
This paper addresses the problem of minimizing a convex, Lipschitz function f over a convex, compact set X under a stochastic bandit feedback model. In this model, the algorithm is allowed to observe noisy realizations of the function value f(x) at any query point x \in X. We demonstrate a generalization of the ellipsoid algorithm that incurs \tilde{O}(\mathrm{poly}(d)\sqrt{T}) regret. Since any algorithm has regret at least \Omega(\sqrt{T}) on this problem, our algorithm is optimal in terms of the scaling with T.
1 Introduction
This paper considers the problem of stochastic convex optimization under bandit feedback, which is a generalization of the classical multi-armed bandit problem formulated by Robbins in 1952. Our problem is specified by a mean cost function f, which is assumed to be convex and Lipschitz, and a convex, compact domain X. The algorithm repeatedly queries f at points x \in X and observes noisy realizations of f(x). Performance of an algorithm is measured by regret, that is, the difference between the values of f at the query points and the minimum value of f over X. This specializes to the classical K-armed setting when X is the probability simplex and f is linear. Several recent works consider the continuum-armed bandit problem, making different assumptions on the structure of f over X. For instance, f is assumed to be linear in [9], a Lipschitz condition on f is assumed in [3, 12, 13], and Srinivas et al. [16] exploit the structure of Gaussian processes. For these "non-parametric" bandit problems, the rates of regret (after T queries) are of the form T^\gamma, with exponent \gamma approaching 1 for large dimension d.
The question addressed in the present paper is: How can we leverage convexity of the mean cost function as a structural assumption? Our main contribution is an algorithm which achieves, with high probability, an \tilde{O}(\mathrm{poly}(d)\sqrt{T}) regret after T requests. This result holds for all convex Lipschitz mean cost functions. We observe that the rate with respect to T does not deteriorate with d, unlike the non-parametric problems mentioned earlier. Let us also remark that \Omega(\sqrt{dT}) lower bounds have been shown for linear mean cost functions, making our algorithm optimal up to factors polynomial in d and logarithmic in T.
Prior Work: Asymptotic rates of \sqrt{T} have been previously achieved by Cope [8] for unimodal functions under stringent conditions (smoothness and strong convexity of the mean cost function, in addition to the maxima being achieved inside the set). Auer et al. [4] show a regret of \tilde{O}(\sqrt{T}) for a one-dimensional non-convex problem with a finite number of maximizers. Yu and Mannor [17] recently studied unimodal bandits in one dimension, but they do not consider higher-dimensional cases. Bubeck et al. [7] show \sqrt{T} regret for a subset of Lipschitz functions with certain metric properties. Convex, Lipschitz cost functions have also been looked at in the adversarial model [10, 12], but the best-known regret bounds for these algorithms are O(T^{3/4}). We also note that previous results of Agarwal et al. [1] and Nesterov [15] do not apply to our setting, as noted in the full-length version of this paper [2].

The problem addressed in this paper is closely related to noisy zeroth-order convex optimization, whereby the algorithm queries a point of the domain X and receives a noisy evaluation of the function. While the literature on stochastic optimization is vast, we emphasize that an optimization guarantee does not necessarily imply a bound on regret. In particular, we directly build on an optimization method developed by Nemirovski and Yudin [14, Chapter 9]. Given \epsilon > 0, the method is guaranteed to produce an \epsilon-minimizer in \tilde{O}(\mathrm{poly}(d)\, \epsilon^{-2}) iterations, yet this does not immediately imply small regret. The latter is the quantity of interest in this paper, since it is the standard performance measure in decision theory. More importantly, in many applications every query to the function involves a consumption of resources or a monetary cost. A low regret guarantees that the net cost over the entire process is bounded, unlike an optimization error bound.

The remainder of this paper is organized as follows. In the next section, we give the formal problem setup and highlight differences between the metrics of regret and optimization error. We then present a simple algorithm and its analysis for one dimension that illustrates some of the key insights behind the general d-dimensional algorithm in Section 3. Section 4 describes our generalization of the ellipsoid algorithm for d dimensions along with its regret guarantee. Proofs of our results can be found in the full-length version [2].
2 Setup
In this section we will give the basic setup and the performance criterion, and explain the
differences between the metrics of regret and optimization error.
2.1 Problem definition and notation
Let X be a compact and convex subset of R^d, and let f : X \to R be a 1-Lipschitz convex function on X, so that f(x) - f(x') \leq \|x - x'\| for all x, x' \in X. We assume X is specified in a way so that the algorithm can efficiently construct the smallest Euclidean ball containing X. Furthermore, we assume the algorithm has noisy black-box access to f. Specifically, the algorithm is allowed to query the value of f at any x \in X, and it observes y = f(x) + \epsilon, where \epsilon is an independent \sigma-subgaussian random variable with mean zero: E[\exp(\lambda\epsilon)] \leq \exp(\lambda^2 \sigma^2 / 2) for all \lambda \in R. The goal of the algorithm is to minimize its regret: after making T queries x_1, ..., x_T \in X, the regret of the algorithm compared to any x^* \in \arg\min_{x \in X} f(x) is

R_T = \sum_{t=1}^{T} \big( f(x_t) - f(x^*) \big).   (1)
We will construct an average and confidence interval (henceforth CI) for the function values at the points queried by the algorithm. Letting LB_{\gamma_i}(x) and UB_{\gamma_i}(x) denote the lower and upper bounds of a CI of width \gamma_i for the function estimate at a point x, we will say that the CIs at two points are \gamma-separated if LB_{\gamma_i}(x) \geq UB_{\gamma_i}(y) + \gamma or LB_{\gamma_i}(y) \geq UB_{\gamma_i}(x) + \gamma.
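Concretely, such CIs can be built with a standard subgaussian (Hoeffding-type) bound. The helper below is a sketch under our reading of the sampling schedule in Algorithm 1, where (2\sigma^2 \log T)/\gamma^2 queries at a point yield a CI of width \gamma:

import math

def confidence_interval(samples, sigma, T):
    # With n = 2 * sigma**2 * log(T) / gamma**2 samples, the width below
    # equals gamma, and the true mean lies inside w.h.p. (about 1 - 2/T).
    n = len(samples)
    mean = sum(samples) / n
    gamma = math.sqrt(2.0 * sigma ** 2 * math.log(T) / n)
    return mean - gamma, mean + gamma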
2.2 Regret vs. optimization error
Since f is convex, the average \bar{x}_T = \frac{1}{T} \sum_{t=1}^{T} x_t satisfies f(\bar{x}_T) - f(x^*) \leq R_T / T, so that low regret (1) also gives a small optimization error. The converse, however, is not necessarily true. An optimization method might query far from the minimum of the function (that is, explore) on most rounds, and output the solution at the last step. Guaranteeing a small regret typically involves a more careful balancing of exploration and exploitation.
To better understand the difference, suppose X = [0, 1], and let f(x) be one of xT^{-1/3}, -xT^{-1/3}, and x(x - 1). Let us sample function values at x = 1/4 and x = 3/4. To distinguish the first two cases, we need \Omega(T^{2/3}) points. If f is indeed linear, we only incur O(T^{1/3}) regret on these rounds. However, if instead f(x) = x(x - 1), we incur an undesirable \Omega(T^{2/3}) regret. For the purposes of optimization, it suffices to eventually distinguish the three cases. For the purposes of regret minimization, however, an algorithm has to detect that the function curves between the two sampled points. To address this issue, we additionally sample at x = 1/2. The center point acts as a sentinel: if it is recognized that f(1/2) is noticeably below the other two values, the region [0, 1/4] or [3/4, 1] can be discarded. Similarly, one of these regions can be discarded if it is recognized that the value of f either at x = 1/4 or at x = 3/4 is greater than the others. Finally, if f at all three points appears to be similar at a given scale, we have a certificate (due to convexity) that the algorithm is not paying regret per query larger than this scale.
This center-point device, which allows the algorithm to quickly detect that the optimization method might be paying high regret and to act on this information, is the main novel tool of our paper.
Unlike discretization-based methods, the proposed algorithm uses convexity in a crucial way.
We first demonstrate the device on one-dimensional problems in the next section, where the
solution is clean and intuitive. We then develop a version of the algorithm for higher
dimensions, basing our construction on the beautiful zeroth order optimization method of
Nemirovski and Yudin [14]. Their method does not guarantee vanishing regret by itself, and
a careful fusion of this algorithm with our center-point device is required.
3  One-dimensional case
We start with the special case of one dimension to illustrate some of the key ideas, including the center-point device. We assume w.l.o.g. that the domain X = [0, 1] and f(x) ∈ [0, 1] (the latter can be achieved by pinning f(x*) = 0, since f is 1-Lipschitz).
3.1  Algorithm description
Algorithm 1 One-dimensional stochastic convex bandit algorithm
input: noisy black-box access to f : [0, 1] → R, total number of queries allowed T.
1: Let l1 := 0 and r1 := 1.
2: for epoch τ = 1, 2, . . . do
3:   Let wτ := rτ − lτ.
4:   Let xl := lτ + wτ/4, xc := lτ + wτ/2, and xr := lτ + 3wτ/4.
5:   for round i = 1, 2, . . . do
6:     Let γi := 2^{−i}.
7:     For each x ∈ {xl, xc, xr}, query f(x) (2σ log T)/γi² times.
8:     if max{LB_γi(xl), LB_γi(xr)} ≥ min{UB_γi(xl), UB_γi(xr)} + γi then
9:       {Case 1: CIs at xl and xr are γi-separated}
10:      if LB_γi(xl) ≥ LB_γi(xr) then let l_{τ+1} := xl and r_{τ+1} := rτ.
11:      if LB_γi(xl) < LB_γi(xr) then let l_{τ+1} := lτ and r_{τ+1} := xr.
12:      Continue to epoch τ + 1.
13:    else if max{LB_γi(xl), LB_γi(xr)} ≥ UB_γi(xc) + γi then
14:      {Case 2: CIs at xc and xl or xr are γi-separated}
15:      if LB_γi(xl) ≥ LB_γi(xr) then let l_{τ+1} := xl and r_{τ+1} := rτ.
16:      if LB_γi(xl) < LB_γi(xr) then let l_{τ+1} := lτ and r_{τ+1} := xr.
17:      Continue to epoch τ + 1.
18:    end if
19:  end for
20: end for
Algorithm 1 proceeds in a series of epochs demarcated by a working feasible region (the interval Xτ = [lτ, rτ] in epoch τ). In each epoch, the algorithm aims to discard a portion of Xτ determined to contain only suboptimal points. To do this, the algorithm repeatedly makes noisy queries to f at three different points in Xτ. Each epoch is further subdivided into rounds, where we query the function (2σ log T)/γi² times in round i at each of the points. By Hoeffding's inequality, this implies that we know each function value to within γi with high probability. The value γi is halved at every round. At the end of an epoch τ, Xτ is reduced to a subset X_{τ+1} = [l_{τ+1}, r_{τ+1}] ⊆ [lτ, rτ] of the current region for the next epoch τ + 1, and this reduction is such that the new region is smaller in size by a constant fraction. This geometric rate of reduction guarantees that only a small number of epochs can occur before Xτ contains only near-optimal points.

For the algorithm to identify a sizable portion of Xτ to discard, the queries in each epoch should be suitably chosen, and the convexity of f must be exploited. To this end, the algorithm makes its queries at three equally-spaced points xl < xc < xr in Xτ (see Section 4.1 of the full-length version for graphical illustrations of these cases).

Case 1: If the CIs around f(xl) and f(xr) are sufficiently separated, the algorithm discards a fourth of [lτ, rτ] (to the left of xl or to the right of xr) which does not contain x*.

Case 2: If the above separation fails, the algorithm checks whether the CI around f(xc) is sufficiently below at least one of the other CIs (for f(xl) or f(xr)). If that happens, the algorithm again discards a quartile of [lτ, rτ] that does not contain x*.

Case 3: Finally, if none of the earlier cases applies, then the algorithm is assured (by convexity) that the function is sufficiently flat on Xτ and hence it has not incurred much regret so far. The algorithm continues the epoch, with an increased number of queries to obtain smaller confidence intervals at each of the three points.
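For concreteness, the following is a compact Python sketch of Algorithm 1 as just described (ours, not from the paper; Gaussian noise stands in for the σ-subgaussian oracle, and the total budget is enforced by a simple cap, so the final partial round is only approximate):

import math
import numpy as np

def convex_bandit_1d(f, T, sigma=1.0, seed=0):
    """Sketch of Algorithm 1 on [0, 1]. Returns the queried points (regret
    is incurred on these). `f` is the true function; noise is Gaussian and
    hence sigma-subgaussian. All names are our own choices."""
    rng = np.random.default_rng(seed)
    queries = []

    def ci(x, gamma):
        # Query f(x) enough times that the empirical mean is within gamma
        # of f(x) with high probability (Hoeffding for subgaussian noise).
        n = max(1, int(math.ceil(2 * sigma * math.log(T) / gamma ** 2)))
        ys = f(x) + sigma * rng.standard_normal(n)
        queries.extend([x] * n)
        m = float(np.mean(ys))
        return m - gamma, m + gamma  # (LB, UB)

    l, r = 0.0, 1.0
    while len(queries) < T:
        w = r - l
        xl, xc, xr = l + w / 4, l + w / 2, l + 3 * w / 4
        i = 0
        while len(queries) < T:
            i += 1
            gamma = 2.0 ** (-i)
            lb_l, ub_l = ci(xl, gamma)
            lb_c, ub_c = ci(xc, gamma)
            lb_r, ub_r = ci(xr, gamma)
            if max(lb_l, lb_r) >= min(ub_l, ub_r) + gamma:
                # Case 1: CIs at xl and xr are gamma-separated.
                if lb_l >= lb_r:
                    l = xl          # discard [l, xl]
                else:
                    r = xr          # discard [xr, r]
                break
            elif max(lb_l, lb_r) >= ub_c + gamma:
                # Case 2: the center CI is clearly below xl's or xr's.
                if lb_l >= lb_r:
                    l = xl
                else:
                    r = xr
                break
            # Case 3: the function looks flat at scale gamma; refine gamma.
    return queries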
3.2  Analysis
The analysis of Algorithm 1 relies on the function values being contained in the confidence intervals we construct at each round of each epoch. To avoid carrying probabilities throughout the analysis, we define an event E where, at each epoch τ and each round i, f(x) ∈ [LB_γi(x), UB_γi(x)] for x ∈ {xl, xc, xr}. We will carry out the remainder of the analysis conditioned on E and bound the probability of E^c at the end.
The following theorem bounds the regret incurred by Algorithm 1. We note that the regret is measured on the points x_t queried by the algorithm at time t; within any given round, the order of the queries is immaterial to the regret.
Theorem 1 (Regret bound for Algorithm 1). Suppose Algorithm 1 is run on a convex, 1-Lipschitz function f bounded in [0, 1]. Suppose the noise in the observations is i.i.d. and σ-subgaussian. Then with probability at least 1 − 1/T we have
$$\sum_{t=1}^{T}\big(f(x_t) - f(x^*)\big)\ \le\ 108\,\sqrt{\sigma T\log T}\ \log_{4/3}\!\frac{T}{8\sigma\log T}.$$
Remarks: As stated, Algorithm 1 and Theorem 1 assume knowledge of T, but we can make the algorithm adaptive to T by a standard doubling argument. We remark that √T is the smallest possible regret for any algorithm, even with noisy gradient information. Hence, this result shows that for purposes of regret, noisy zeroth-order information is no worse than noisy first-order information, apart from logarithmic factors.
Theorem 1 is proved via a series of lemmas below. The key idea is to show that the regret on any epoch is small and that the total number of epochs is bounded. To bound the per-epoch regret, we will show that the total number of queries made in any epoch depends on how flat the function is on Xτ. We either take a long time, but then the function is very flat, or we stop early because the function has sufficient slope, never accruing too much regret. We start by showing that the reduction of Xτ after each epoch always preserves near-optimal points.

Lemma 1 (Survival of approx. minima). If epoch τ ends in round i, then [l_{τ+1}, r_{τ+1}] contains every x ∈ [lτ, rτ] such that f(x) ≤ f(x*) + γi. In particular, x* ∈ [lτ, rτ] for all τ.
The next two lemmas bound the regret incurred in any single epoch. To show this, we first establish that the algorithm incurs low regret in a round as long as it does not end the epoch. Then, as a consequence of the doubling trick, we show that the regret incurred in an epoch is of the same order as that incurred in the last round of the epoch.

Lemma 2 (Certificate of low regret). If epoch τ continues from round i to round i + 1, then the regret incurred in round i is at most 72 γi⁻¹ σ log T.

Lemma 3 (Regret in an epoch). If epoch τ ends in round i, then the regret incurred in the entire epoch is at most 216 γi⁻¹ σ log T.

To obtain a bound on the overall regret, we bound the number of epochs that can occur before Xτ contains only near-optimal points. The final regret bound is simply the product of the number of epochs and the regret incurred in any single epoch.
Lemma 4 (Bound on the number of epochs). The total number of epochs τ performed by Algorithm 1 is bounded as
$$\tau\ \le\ \frac{1}{2}\,\log_{4/3}\!\frac{T}{8\sigma\log T}.$$
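To see how these pieces fit together, here is a short worked derivation (our reading of the argument, using only the constants stated above): a round i that completes within the query budget must satisfy 3(2σ log T)/γi² ≤ T, hence γi⁻¹ ≤ √(T/(6σ log T)) ≤ √(T/(2σ log T)). Combining Lemma 3 (per-epoch regret) with Lemma 4 (epoch count) then gives
$$R_T\ \le\ \Big(216\,\gamma_i^{-1}\sigma\log T\Big)\cdot\Big(\tfrac{1}{2}\,\log_{4/3}\tfrac{T}{8\sigma\log T}\Big)\ \le\ 108\,\sqrt{\tfrac{\sigma T\log T}{2}}\ \log_{4/3}\tfrac{T}{8\sigma\log T}\ \le\ 108\,\sqrt{\sigma T\log T}\,\log_{4/3}\tfrac{T}{8\sigma\log T},$$
which matches the bound in Theorem 1 up to the event E holding.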
4  Algorithm for optimization in higher dimensions
We now move to present the general algorithm that works in d dimensions. The natural approach would be to try to generalize Algorithm 1 to work in multiple dimensions. However, the obvious extension requires querying the function along every direction in a covering of the unit sphere so that we know the behavior of the function along every direction. Such an approach yields regret and running time that scale exponentially with the dimension d. Nemirovski and Yudin [14] address this problem in the setup of zeroth-order optimization by a clever construction to capture all the directions in polynomially many queries. We define a pyramid to be a d-dimensional polyhedron defined by d + 1 points; d points form a regular polygon that is the base of the pyramid, and the apex lies above the hyperplane containing the base (see Figure 1 for a graphical illustration in 3 dimensions). The idea of Nemirovski and Yudin is to build a sequence of pyramids, each capturing the variation of the function in certain directions, in such a way that with O(d log d) pyramids we can explore all the directions. However, as mentioned earlier, their approach fails to give a low regret. We combine their geometric construction with ideas from the one-dimensional case to obtain Algorithm 2, which incurs a bounded regret.
[Figure 1: Pyramid in 3 dimensions.]
[Figure 2: The regular simplex constructed at round i of epoch τ, with radius rτ, center x0 and vertices x1, . . . , x_{d+1}.]
Just like in the 1-dimensional case, Algorithm 2 proceeds in epochs. We start with the optimization domain X, and at the beginning we set X0 = X. At the beginning of epoch τ, we have a current feasible set Xτ which contains at least one approximate optimum of the convex function. The epoch ends with discarding some portion of the set Xτ in such a way that we still retain at least one approximate optimum in the remaining set X_{τ+1}.

At the start of epoch τ, we apply an affine transformation to Xτ so that the smallest-volume ellipsoid containing it is a Euclidean ball of radius Rτ (denoted by B(Rτ)). We define rτ = Rτ/(c1 d) for a constant c1 ≥ 1, so that B(rτ) ⊆ Xτ (see, e.g., Lecture 1, p. 2 of [5]). We will use the notation Bτ to refer to the enclosing ball. Within each epoch, the algorithm proceeds in several rounds, each round maintaining a value γi which is successively halved.
Algorithm 2 Stochastic convex bandit algorithm
input: feasible region X ⊆ R^d; noisy black-box access to f : X → R; constants c1 and c2; functions Δτ(·), δτ(·); number of queries T allowed.
1: Let X1 := X.
2: for epoch τ = 1, 2, . . . do
3:   Round Xτ so that B(rτ) ⊆ Xτ ⊆ B(Rτ) with Rτ minimized, and rτ := Rτ/(c1 d). Let Bτ = B(Rτ).
4:   Construct a regular simplex with vertices x1, . . . , x_{d+1} on the surface of B(rτ).
5:   for round i = 1, 2, . . . do
6:     Let γi := 2^{−i}.
7:     Query f at xj for each j = 1, . . . , d + 1 (2σ log T)/γi² times.
8:     Let y1 := arg max_j LB_γi(xj).
9:     for k = 1, 2, . . . do
10:      Construct pyramid Πk with apex yk; let z1, . . . , zd be the vertices of the base of Πk and z0 be the center of Πk.
11:      Let γ̂ := 2^{−1}.
12:      loop
13:        Query f at each of {yk, z0, z1, . . . , zd} (2σ log T)/γ̂² times.
14:        Let center := z0, apex := yk, top be the vertex v of Πk maximizing LB_γ̂(v), and bottom be the vertex v of Πk minimizing LB_γ̂(v).
15:        if LB_γ̂(top) ≥ UB_γ̂(bottom) + Δτ(γ̂) and LB_γ̂(top) ≥ UB_γ̂(apex) + γ̂ then
16:          {Case (1a)}
17:          Let y_{k+1} := top, and immediately continue to pyramid k + 1.
18:        else if LB_γ̂(top) ≥ UB_γ̂(bottom) + Δτ(γ̂) and LB_γ̂(top) < UB_γ̂(apex) + γ̂ then
19:          {Case (1b)}
20:          Set (X_{τ+1}, B′_{τ+1}) = Cone-cutting(Πk, Xτ, Bτ), and proceed to epoch τ + 1.
21:        else if LB_γ̂(top) < UB_γ̂(bottom) + Δτ(γ̂) and UB_γ̂(center) ≥ LB_γ̂(bottom) − δτ(γ̂) then
22:          {Case (2a)}
23:          Let γ̂ := γ̂/2.
24:          if γ̂ < γi then start next round i + 1.
25:        else if LB_γ̂(top) < UB_γ̂(bottom) + Δτ(γ̂) and UB_γ̂(center) < LB_γ̂(bottom) − δτ(γ̂) then
26:          {Case (2b)}
27:          Set (X_{τ+1}, B′_{τ+1}) = Hat-raising(Πk, Xτ, Bτ), and proceed to epoch τ + 1.
28:        end if
29:      end loop
30:    end for
31:  end for
32: end for
Algorithm 3 Cone-cutting
input: pyramid Π with apex y, (rounded) feasible region Xτ for epoch τ, enclosing ball Bτ.
1: Let z1, . . . , zd be the vertices of the base of Π, and ϕ̄ the angle at its apex.
2: Define the cone Kτ = {x | ∃λ > 0, α1, . . . , αd > 0 with α1 + · · · + αd = 1 : x = y − λ Σᵢ αi(zi − y)}.
3: Set B′_{τ+1} to be the minimum-volume ellipsoid containing Bτ \ Kτ.
4: Set X_{τ+1} = Xτ ∩ B′_{τ+1}.
output: new feasible region X_{τ+1} and enclosing ellipsoid B′_{τ+1}.
Algorithm 4 Hat-raising
input: pyramid Π with apex y, (rounded) feasible region Xτ for epoch τ, enclosing ball Bτ.
1: Let center be the center of Π.
2: Set y′ = y + (y − center).
3: Set Π′ to be the pyramid with apex y′ and the same base as Π.
4: Set (X_{τ+1}, B′_{τ+1}) = Cone-cutting(Π′, Xτ, Bτ).
output: new feasible region X_{τ+1} and enclosing ellipsoid B′_{τ+1}.
[Figure 3: Sequence of pyramids constructed by Algorithm 2.]
Let x0 be the center of the ball B(Rτ) containing Xτ. At the start of a round i, we construct a regular simplex centered at x0 and contained in B(rτ). The algorithm queries the function f at all the vertices of the simplex, denoted by x1, . . . , x_{d+1}, until the CIs at each vertex shrink to γi. The algorithm picks the point y1 that maximizes LB_γi(xj). By construction, f(y1) ≥ f(xj) − γi for all j = 1, . . . , d + 1. This step is depicted in Figure 2.
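One way to realize such a regular simplex (a sketch of ours, using the standard embedding of the (d+1)-vertex simplex into the zero-sum hyperplane of R^{d+1}; the construction itself is not prescribed by the paper):

import numpy as np

def regular_simplex(x0, r):
    """Return d+1 vertices of a regular simplex centered at x0 (in R^d),
    all at distance r from x0. Standard construction: center the d+1
    canonical basis vectors of R^(d+1), map the zero-sum hyperplane down
    to R^d with an orthonormal basis, then rescale to radius r."""
    d = len(x0)
    E = np.eye(d + 1) - np.full((d + 1, d + 1), 1.0 / (d + 1))  # centered e_j
    _, s, Vt = np.linalg.svd(E)       # row space = zero-sum hyperplane
    B = Vt[:d]                        # orthonormal basis, shape (d, d+1)
    V = E @ B.T                       # d+1 regular, centered points in R^d
    V *= r / np.linalg.norm(V[0])     # put vertices on the sphere of radius r
    return V + np.asarray(x0)

# Example: all vertices are at distance r from x0 and equidistant pairwise.
verts = regular_simplex(np.zeros(3), r=1.0)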
The algorithm now successively constructs a sequence of pyramids, with the goal of identifying a region of the feasible set Xτ such that at least one approximate optimum of f lies outside the selected region. This region will be discarded at the end of the epoch. The construction of the pyramids follows the construction from Section 9.2.2 of Nemirovski and Yudin [14]. The pyramids we construct have an angle 2ϕ at the apex, where cos ϕ = c2/d. The base of the pyramid consists of vertices z1, . . . , zd such that zi − x0 and y1 − zi are orthogonal. We note that the construction of such a pyramid is always possible: we take a sphere with y1 − x0 as the diameter, and arrange z1, . . . , zd on the boundary of the sphere such that the angle between y1 − x0 and y1 − zi is ϕ. The construction of the pyramid is depicted in Figure 3. Given this pyramid, we set γ̂ = 1 and sample the function at y1, at z1, . . . , zd, and at the center of the pyramid until the CIs all shrink to γ̂. Let top and bottom denote the vertices of the pyramid (including y1) with the largest and the smallest function value estimates, respectively. For consistency, we will also use apex to denote the apex y1. We then check for one of the following conditions (see Section 5 of the full-length version [2] for graphical illustrations of these cases):
version [2] for graphical illustrations of these cases):
(1) If LB?b (top) ? UB?b (bottom) + ?? (b
? ), we proceed based on the separation between
top and apex CI?s.
(a) If LB?b (top) ? UB?b (apex) + ?
b, then we know that with high probability
f (top) ? f (apex) + ?
b ? f (apex) + ?i .
(2)
In this case, we set top to be the apex of the next pyramid, reset ?
b = 1 and
continue the sampling procedure on the next pyramid.
(b) If LB?b (top) ? UB?b (apex)+b
? , then we know that LB?b (apex) ? UB?b (bottom)+
?? (b
? ) ? 2b
? . In this case, we declare the epoch over and pass the current apex to
the cone-cutting step.
(2) If LB?b (top) ? UB?b (bottom) + ?? (b
? ), then one of the following happens:
(a) If UB?b (center) ? LB?b (bottom) ? ?? (b
? ), then all of the vertices and the center
of the pyramid have their function values within a 2?? (b
? ) + 3b
? interval. In this
case, we set ?
b=?
b/2. If this sets ?
b < ?i , we start the next round with ?i+1 = ?i /2.
Otherwise, we continue sampling the current pyramid with the new value of ?
b.
(b) If UB?b (center) ? LB?b (bottom) ? ?? (b
? ), then we terminate the epoch and pass
the center and the current apex to the hat-raising step.
Hat-raising: This step happens when the algorithm enters case (2b). In this case, we will show that if we move the apex of the pyramid a little, from y_i to y′_i, then the CI at y′_i lies above the top CI, while the angle of the new pyramid at y′_i is not much smaller than ϕ. Letting center_i denote the center of the pyramid, we set y′_i = y_i + (y_i − center_i) and denote the angle at the apex y′_i by 2ϕ̄. Figure 4 shows the transformation involved in this step.
[Figure 4: Transformation of the pyramid Π in the hat-raising step.]
[Figure 5: Cone-cutting step at epoch τ. The solid circle is the enclosing ball Bτ. The shaded region is the intersection of Kτ with Bτ. The dotted ellipsoid is the new enclosing ellipsoid B′_{τ+1}.]
Cone-cutting: This step concludes an epoch. The algorithm gets here either through case (1b) or through the hat-raising step. In either case, we have a pyramid with an apex y, base z1, . . . , zd, and an angle 2ϕ̄ at the apex, where cos(ϕ̄) ≤ 2c2/d. We now define the cone
$$K_\tau \;=\; \Big\{\,x \;:\; \exists\lambda > 0,\ \alpha_1,\ldots,\alpha_d > 0,\ \sum_{i=1}^{d}\alpha_i = 1,\ x = y - \lambda\sum_{i=1}^{d}\alpha_i(z_i - y)\Big\} \qquad (3)$$
which is centered at y and is a reflection of the pyramid around the apex. By construction, the cone Kτ has an angle 2ϕ̄ at its apex. We set B′_{τ+1} to be the ellipsoid of minimum volume containing Bτ \ Kτ and define X_{τ+1} = Xτ ∩ B′_{τ+1}. This is illustrated in Figure 5. Finally, we put things back into an isotropic position: B_{τ+1} is the ball containing X_{τ+1} in the isotropic coordinates, obtained by applying an affine transformation to B′_{τ+1}.
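The cone (3) admits a simple membership test when the base directions span R^d (a small sketch of ours, not from the paper): writing y − x = λ Σᵢ αi(zi − y) with λαi > 0, the coefficients are obtained from one linear system.

import numpy as np

def in_cone(x, y, Z, tol=1e-12):
    """Test x in K = {y - lam * sum_i a_i (z_i - y) : lam > 0, a_i > 0, sum a_i = 1}.

    Z holds the base vertices z_i as rows. Assuming the directions (z_i - y)
    form a basis of R^d, x lies in the (open) cone iff y - x is a strictly
    positive combination of them, i.e. all coefficients c_i = lam * a_i > 0."""
    D = (Z - y).T                      # columns are z_i - y
    c = np.linalg.solve(D, y - x)      # y - x = sum_i c_i (z_i - y)
    return bool(np.all(c > tol))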
Let us end with a brief discussion of the computational aspects of this algorithm. Clearly, the most computationally intensive steps are cone-cutting and the isotropic transformation at the end. However, these are exactly analogous to the classical ellipsoid method. In particular, the equation for B′_{τ+1} is known in closed form [11]. Furthermore, the affine transformations needed to reshape the set can be computed via rank-one matrix updates, and hence the computation of inverses can be done efficiently as well (see, e.g., [11] for the relevant implementation details of the ellipsoid method).
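For comparison, this is the textbook closed-form update of the classical ellipsoid method for a central cut (a sketch of ours; it is not the paper's cone-cutting update, whose exact closed form we defer to [11]): given E = {x : (x − c)ᵀA⁻¹(x − c) ≤ 1} and the half-space {x : aᵀx ≤ aᵀc}, the minimum-volume ellipsoid enclosing their intersection follows from a rank-one update.

import numpy as np

def ellipsoid_central_cut(c, A, a):
    """One central-cut step of the classical ellipsoid method (assumes d >= 2).

    E = {x : (x - c)^T A^{-1} (x - c) <= 1} is cut by {x : a^T x <= a^T c};
    returns the center and shape matrix of the minimum-volume ellipsoid
    containing the intersection (closed form, see e.g. Goldfarb and Todd)."""
    d = len(c)
    Aa = A @ a
    b = Aa / float(np.sqrt(a @ Aa))                 # normalized step direction
    c_new = c - b / (d + 1)
    A_new = (d * d / (d * d - 1.0)) * (A - (2.0 / (d + 1)) * np.outer(b, b))
    return c_new, A_new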
The following theorem states our regret guarantee on the performance of Algorithm 2.

Theorem 2. Suppose Algorithm 2 is run with c1 ≥ 64, c2 ≤ 1/32 and parameters
$$\Delta_\tau(\gamma) = \Big(\frac{6c_1 d^4}{c_2^2} + 3\Big)\gamma \qquad\text{and}\qquad \delta_\tau(\gamma) = \Big(\frac{6c_1 d^4}{c_2^2} + 5\Big)\gamma.$$
Then with probability at least 1 − 1/T, the regret incurred by the algorithm is bounded by
$$768\,d^{7/2}\sqrt{\sigma T\log T}\ \frac{2d\log d}{c_2^2}\left(\frac{4d^2 c_1}{c_2^3}\Big(\frac{d(d+1)}{c_2^2} + 1\Big) + \frac{12c_1 d^4}{c_2^3} + 11\right)\ =\ \tilde{O}\big(d^{16}\sqrt{T}\big).$$
Remarks: Theorem 2 is again optimal in its dependence on T. The large dependence on d is also seen in Nemirovski and Yudin [14], who obtain a d^7 scaling in the noiseless case and leave it an unspecified polynomial in the noisy case. Using random walk ideas [6] to improve the dependence on d is an interesting question for future research.
Acknowledgments
Part of this work was done while AA and DH were at the University of Pennsylvania. AA was partially supported by MSR and Google PhD fellowships and NSF grant CCF-1115788 while this work was done. DH was partially supported under grants AFOSR FA9550-09-1-0425, NSF IIS-1016061, and NSF IIS-713540. AR gratefully acknowledges the support of NSF under grant CAREER DMS-0954737.
References
[1] A. Agarwal, O. Dekel, and L. Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In COLT, 2010.
[2] A. Agarwal, D. Foster, D. Hsu, S. Kakade, and A. Rakhlin. Stochastic convex optimization with bandit feedback. URL http://arxiv.org/abs/1107.1744, 2011.
[3] R. Agrawal. The continuum-armed bandit problem. SIAM Journal on Control and Optimization, 33:1926, 1995.
[4] P. Auer, R. Ortner, and C. Szepesvári. Improved rates for the stochastic continuum-armed bandit problem. In Learning Theory, pages 454-468, 2007.
[5] K. Ball. An elementary introduction to modern convex geometry. In Flavors of Geometry, number 31 in Publications of the Mathematical Sciences Research Institute, pages 1-55. 1997.
[6] D. Bertsimas and S. Vempala. Solving convex programs by random walks. Journal of the ACM, 51(4):540-556, 2004.
[7] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1655-1695, 2011.
[8] E. W. Cope. Regret and convergence bounds for a class of continuum-armed bandit problems. IEEE Transactions on Automatic Control, 54(6):1243-1253, 2009.
[9] V. Dani, T. P. Hayes, and S. M. Kakade. Stochastic linear optimization under bandit feedback. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), 2008.
[10] A. D. Flaxman, A. T. Kalai, and B. H. McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 385-394, 2005.
[11] D. Goldfarb and M. J. Todd. Modifications and implementation of the ellipsoid algorithm for linear programming. Mathematical Programming, 23:1-19, 1982.
[12] R. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. Advances in Neural Information Processing Systems, 18, 2005.
[13] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pages 681-690. ACM, 2008.
[14] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, New York, 1983.
[15] Y. Nesterov. Random gradient-free minimization of convex functions. Technical Report 2011/1, CORE DP, 2011.
[16] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: no regret and experimental design. arXiv preprint arXiv:0912.3995, 2009.
[17] J. Y. Yu and S. Mannor. Unimodal bandits. In ICML, 2011.
See the Tree Through the Lines: The Shazoo Algorithm*
Nicolò Cesa-Bianchi
DSI, University of Milan, Italy
[email protected]
Fabio Vitale
DSI, University of Milan, Italy
[email protected]
Giovanni Zappella
Dept. of Mathematics, Univ. of Milan, Italy
[email protected]
Claudio Gentile
DICOM, University of Insubria, Italy
[email protected]
Abstract
Predicting the nodes of a given graph is a fascinating theoretical problem with applications in several domains. Since graph sparsification via spanning trees retains enough information while making the task much easier, trees are an important special case of this problem. Although it is known how to predict the nodes of an unweighted tree in a nearly optimal way, in the weighted case a fully satisfactory algorithm is not available yet. We fill this hole and introduce an efficient node predictor, SHAZOO, which is nearly optimal on any weighted tree. Moreover, we show that SHAZOO can be viewed as a common nontrivial generalization of both previous approaches for unweighted trees and weighted lines. Experiments on real-world datasets confirm that SHAZOO performs well in that it fully exploits the structure of the input tree, and gets very close to (and sometimes better than) less scalable energy minimization methods.
1  Introduction
Predictive analysis of networked data is a fast-growing research area whose application domains include document networks, online social networks, and biological networks. In this work we view networked data as weighted graphs, and focus on the task of node classification in the transductive setting, i.e., when the unlabeled graph is available beforehand. Standard transductive classification methods, such as label propagation [2, 3, 18], work by optimizing a cost or energy function defined on the graph, which includes the training information as labels assigned to training nodes. Although these methods perform well in practice, they are often computationally expensive, and have performance guarantees that require statistical assumptions on the selection of the training nodes.

A general approach to sidestep the above computational issues is to sparsify the graph to the largest possible extent, while retaining much of its spectral properties (see, e.g., [5, 6, 12, 16]). Inspired by [5, 6], this paper reduces the problem of node classification from graphs to trees by extracting suitable spanning trees of the graph, which can be done quickly in many cases. The advantage of performing this reduction is that node prediction is much easier on trees than on graphs. This fact has recently led to the design of very scalable algorithms with nearly optimal performance guarantees in the online transductive model, which comes with no statistical assumptions. Yet, the current results in node classification on trees are not satisfactory. The TREEOPT strategy of [5] is optimal to within constant factors, but only on unweighted trees. No equivalent optimality results are available for general weighted trees. To the best of our knowledge, the only other comparable result is WTA by [6], which is optimal (within log factors) only on weighted lines. In fact, WTA can still be applied to weighted trees by exploiting an idea contained in [9], based on linearizing the tree via a depth-first visit. Since linearization loses most of the structural information of the tree, this approach yields suboptimal mistake bounds. This theoretical drawback is also confirmed by empirical performance: throwing away the tree structure negatively affects the practical behavior of the algorithm on real-world weighted graphs.

* This work was supported in part by Google Inc. through a Google Research Award, and by the PASCAL2 Network of Excellence under EC grant 216886. This publication only reflects the authors' views.
The importance of weighted graphs, as opposed to unweighted ones, is suggested by many practical scenarios where the nodes carry more information than just labels, e.g., vectors of feature values. A natural way of leveraging this side information is to set the weight on the edge linking two nodes to be some function of the similarity between the vectors associated with these nodes. In this work, we bridge the gap between the weighted and unweighted cases by proposing a new prediction strategy, called SHAZOO, achieving a mistake bound that depends on the detailed structure of the weighted tree. We carry out the analysis using a notion of learning bias different from the one used in [6] and more appropriate for weighted graphs. More precisely, we measure the regularity of the unknown node labeling via the weighted cutsize induced by the labeling on the tree (see Section 3 for a precise definition). This replaces the unweighted cutsize that was used in the analysis of WTA. When the weighted cutsize is used, a cut edge violates this inductive bias in proportion to its weight. This modified bias does not prevent a fair comparison between the old algorithms and the new one: SHAZOO specializes to TREEOPT in the unweighted case, and to WTA when the input tree is a weighted line. By specializing SHAZOO's analysis to the unweighted case we recover TREEOPT's optimal mistake bound. When the input tree is a weighted line, we recover WTA's mistake bound expressed through the weighted cutsize instead of the unweighted one. The effectiveness of SHAZOO on any tree is guaranteed by a corresponding lower bound (see Section 3).

SHAZOO can be viewed as a common nontrivial generalization of both TREEOPT and WTA. Obtaining this generalization while retaining and extending the optimality properties of the two algorithms is far from trivial from a conceptual and technical standpoint. Since SHAZOO works in the online transductive model, it can easily be applied to the more standard train/test (or "batch") transductive setting: one simply runs the algorithm on an arbitrary permutation of the training nodes, and obtains a predictive model for all test nodes. However, the implementation might take advantage of knowing the set of training nodes beforehand. For this reason, we present two implementations of SHAZOO, one for the online and one for the batch setting. Both implementations result in fast algorithms. In particular, the batch one is linear in |V|. This is achieved by a fast algorithm for weighted cut minimization on trees, a procedure which lies at the heart of SHAZOO.

Finally, we test SHAZOO against WTA, label propagation, and other competitors on real-world weighted graphs. In almost all cases (as expected), we report improvements over WTA due to the better sensitivity to the graph structure. In some cases, we see that SHAZOO even outperforms standard label propagation methods. Recall that label propagation has a running time per prediction which is proportional to |E|, where E is the graph edge set. On the contrary, SHAZOO can typically be run in constant amortized time per prediction by using Wilson's algorithm for sampling random spanning trees [17]. By disregarding edge weights in the initial sampling phase, this algorithm is able to draw a random (unweighted) spanning tree in time proportional to |V| on most graphs. Our experiments reveal that using the edge weights only in the subsequent prediction phase causes only a minor performance degradation in practice.
2  Preliminaries and basic notation
Let T = (V, E, W) be an undirected and weighted tree with |V| = n nodes, positive edge weights W_{i,j} > 0 for (i, j) ∈ E, and W_{i,j} = 0 for (i, j) ∉ E. A binary labeling of T is any assignment y = (y1, . . . , yn) ∈ {−1, +1}^n of binary labels to its nodes. We use (T, y) to denote the resulting labeled weighted tree. The online learning protocol for predicting (T, y) is defined as follows. The learner is given T while y is kept hidden. The nodes of T are presented to the learner one by one, according to an unknown and arbitrary permutation i1, . . . , i_n of V. At each time step t = 1, . . . , n node i_t is presented and the learner must issue a prediction ŷ_{i_t} ∈ {−1, +1} for the label y_{i_t}. Then y_{i_t} is revealed and the learner knows whether a mistake occurred. The learner's goal is to minimize the total number of prediction mistakes.

Following previous works [10, 9, 5, 6], we measure the regularity of a labeling y of T in terms of φ-edges, where a φ-edge for (T, y) is any (i, j) ∈ E such that y_i ≠ y_j. The overall amount of irregularity in a labeled tree (T, y) is the weighted cutsize Φ_W = Σ_{(i,j)∈E^φ} W_{i,j}, where E^φ ⊆ E is the subset of φ-edges in the tree. We use the weighted cutsize as our learning bias; that is, we want to design algorithms whose predictive performance scales with Φ_W. Unlike the φ-edge count φ = |E^φ|, which is a good measure of regularity for unweighted graphs, the weighted cutsize takes the edge weight W_{i,j} into account when measuring the irregularity of a φ-edge (i, j). In the sequel, whenever we measure the distance between a pair of nodes i and j on the input tree T we use the resistance distance metric d, that is, d(i, j) = Σ_{(r,s)∈π(i,j)} 1/W_{r,s}, where π(i, j) is the unique path connecting i to j.
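A short sketch (ours) of the resistance distance on a weighted tree, finding the unique path via parent pointers from a BFS rooting; the adjacency layout is an illustrative choice:

from collections import deque

def resistance_distance(adj, i, j):
    """d(i, j) = sum of 1/W over the unique path between i and j.
    adj: dict mapping node -> list of (neighbor, weight) pairs in a tree."""
    parent = {i: (None, None)}
    q = deque([i])
    while q:                         # BFS from i records parent pointers
        u = q.popleft()
        if u == j:
            break
        for v, w in adj[u]:
            if v not in parent:
                parent[v] = (u, w)
                q.append(v)
    dist, u = 0.0, j
    while parent[u][0] is not None:  # walk back up the unique i-j path
        u, w = parent[u]
        dist += 1.0 / w
    return dist

# Example: a weighted path 1 -(2)- 2 -(4)- 3 gives d(1, 3) = 1/2 + 1/4.
tree = {1: [(2, 2.0)], 2: [(1, 2.0), (3, 4.0)], 3: [(2, 4.0)]}
assert abs(resistance_distance(tree, 1, 3) - 0.75) < 1e-12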
3  A lower bound for weighted trees
In this section we show that the weighted cutsize can be used as a lower bound on the number of online mistakes made by any algorithm on any tree. In order to do so (and unlike previous papers on this specific subject; see, e.g., [6]), we need to introduce a more refined notion of adversarial "budget". Given T = (V, E, W), let Ψ(M) be the maximum number of edges of T such that the sum of their weights does not exceed M:
$$\Psi(M) \;=\; \max\Big\{\,|E'| \;:\; E' \subseteq E,\ \sum_{(i,j)\in E'} W_{i,j} \le M \Big\}.$$
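Since Ψ counts how many of the lightest edges fit within a budget M, it can be computed greedily (a minimal sketch of ours):

def psi(weights, M):
    """Maximum number of tree edges whose total weight does not exceed M:
    sort the edge weights and take the lightest ones while the budget lasts."""
    count, total = 0, 0.0
    for w in sorted(weights):
        if total + w > M:
            break
        total += w
        count += 1
    return count

# Example: weights [3, 1, 2] with M = 3.5 give Psi = 2 (the edges of weight 1 and 2).
assert psi([3.0, 1.0, 2.0], 3.5) == 2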
We have the following simple lower bound (all proofs are omitted from this extended abstract).

Theorem 1. For any weighted tree T = (V, E, W) there exists a randomized label assignment to V such that any algorithm can be forced to make at least Ψ(M)/2 online mistakes in expectation, while Φ_W ≤ M.

Specializing [6, Theorem 1] to trees gives the lower bound K/2 under the constraint φ ≤ K ≤ |V|. The main difference between the two bounds is the measure of label regularity being used: whereas Theorem 1 uses Φ_W, which depends on the weights, [6, Theorem 1] uses the weight-independent quantity φ. This dependence of the lower bound on the edge weights is consistent with our learning bias, stating that a heavy φ-edge violates the bias more than a light one. Since Ψ is nondecreasing, the lower bound implies a number of mistakes of at least Ψ(Φ_W)/2. Note that Ψ(Φ_W) ≥ φ for any labeled tree (T, y). Hence, whereas a constraint K on φ implies forcing at least K/2 mistakes, a constraint M on Φ_W allows the adversary to force a potentially larger number of mistakes.

In the next section we describe an algorithm whose mistake bound nearly matches the above lower bound on any weighted tree when using Ψ(Φ_W) as the measure of label regularity.
4  The Shazoo algorithm
In this section we introduce the SHAZOO algorithm, and relate it to previously proposed methods for online prediction on unweighted trees (TREEOPT from [5]) and weighted line graphs (WTA from [6]). In fact, SHAZOO is optimal on any weighted tree, and reduces to TREEOPT on unweighted trees and to WTA on weighted line graphs. Since TREEOPT and WTA are optimal on any unweighted tree and any weighted line graph, respectively, SHAZOO necessarily contains elements of both of these algorithms.

In order to understand our algorithm, we now define some relevant structures of the input tree T; see Figure 1 (left) for an example. These structures evolve over time according to the set of observed labels. First, we call revealed a node whose label has already been observed by the online learner; otherwise, a node is unrevealed. A fork is any unrevealed node connected to at least three different revealed nodes by edge-disjoint paths. A hinge node is either a revealed node or a fork. A hinge tree is any component of the forest obtained by removing from T all edges incident to hinge nodes; hence any fork or labeled node forms a 1-node hinge tree. When a hinge tree H contains only one hinge node, a connection node for H is the node contained in H. In all other cases, we call a connection node for H any node outside H which is adjacent to a node in H. A connection fork is a connection node which is also a fork. Finally, a hinge line is any path connecting two hinge nodes such that no internal node is a hinge node.

Given an unrevealed node i and a label value y ∈ {−1, +1}, the cut function cut(i, y) is the value of the minimum weighted cutsize of T over all labelings y ∈ {−1, +1}^n consistent with the labels seen so far and such that y_i = y. Define Δ(i) = cut(i, −1) − cut(i, +1) if i is unrevealed, and Δ(i) = y_i otherwise. The algorithm's pseudocode is given in Algorithm 1. At time t, in order to predict the label y_{i_t} of node i_t, SHAZOO calculates Δ(i) for all connection nodes i of H(i_t), where H(i_t) is the hinge tree containing i_t. Then the algorithm predicts y_{i_t} using the label of the connection node i of H(i_t) which is closest to i_t and such that Δ(i) ≠ 0 (recall from Section 2 that all distances/lengths are measured using the resistance metric). Ties are broken arbitrarily. If Δ(i) = 0 for all connection nodes i in H(i_t), then SHAZOO predicts a default value (−1 in the pseudocode).
[Figure 1: Left: An input tree. Revealed nodes are dark grey, forks are doubly circled, and hinge lines have thick black edges. The hinge trees not containing hinge nodes (i.e., the ones that are not singletons) are enclosed by dotted lines. The dotted arrows point to the connection node(s) of such hinge trees. Middle: The predictions of SHAZOO on the nodes of a hinge tree. The numbers on the edges denote edge weights. At a given time t, SHAZOO uses the value of Δ on the two hinge nodes (the doubly circled ones, which are also forks in this case), and is required to issue a prediction on node i_t (the black node in this figure). Since i_t lies between a positive-Δ hinge node and a negative-Δ hinge node, SHAZOO goes with the one which is closer in resistance distance, hence predicting ŷ_{i_t} = −1. Right: A simple example where the mincut prediction strategy does not work well in the weighted case. In this example, mincut mispredicts all labels, yet φ = 1, and the ratio of Φ_W to the total weight of all edges is about 1/|V|. The labels to be predicted are presented according to the numbers on the left of each node. Edge weights are also displayed (1, 1+a, 1+2a, . . . , 1+(|V|−1)a along the line), where a is a very small constant.]
If i_t is a fork (which is also a hinge node), then H(i_t) = {i_t}. In this case, i_t is a connection node of H(i_t), and obviously the one closest to itself. Hence, in this case SHAZOO predicts y_{i_t} simply by ŷ_{i_t} = sgn(Δ(i_t)). See Figure 1 (middle) for an example.
Algorithm 1: SHAZOO
for t = 1, . . . , n
  Let C(H(i_t)) be the set of the connection nodes i of H(i_t) for which Δ(i) ≠ 0
  if C(H(i_t)) ≠ ∅
    Let j be the node of C(H(i_t)) closest to i_t
    Set ŷ_{i_t} = sgn(Δ(j))
  else Set ŷ_{i_t} = −1 (default value)
On unweighted trees, computing Δ(i) for a connection node i reduces to the Fork Label Estimation Procedure in [5, Lemma 13]. On the other hand, predicting with the label of the connection node closest to i_t in resistance distance is reminiscent of the nearest-neighbor prediction of WTA on weighted line graphs [6]. In fact, as in WTA, this enables the algorithm to take advantage of labelings whose φ-edges are lightly weighted. An important limitation of WTA is that it linearizes the input tree. On the one hand, this greatly simplifies the analysis of nearest-neighbor prediction; on the other hand, it prevents exploiting the structure of T, thereby causing logarithmic slacks in the upper bound of WTA. The TREEOPT algorithm, instead, performs better when the unweighted input tree is very different from a line graph (more precisely, when the input tree cannot be decomposed into long edge-disjoint paths; e.g., a star graph). Indeed, TREEOPT's upper bound does not suffer from logarithmic slacks, and is tight up to constant factors on any unweighted tree. Similar to TREEOPT, SHAZOO does not linearize the input tree and extends TREEOPT's superior performance to the weighted case, which is also confirmed by the experimental comparison reported in Section 6.
In Figure 1 (right) we show an example that highlights the importance of using the Δ function to compute the fork labels. Since Δ predicts a fork i_t with the label that minimizes the weighted cutsize of T consistent with the revealed labels, one may wonder whether computing Δ through a mincut based on the number of φ-edges (rather than their weighted sum) could be an effective prediction strategy. Figure 1 (right) illustrates a simple tree where such a Δ mispredicts the labels of all nodes, even though both Φ_W and φ are small.
Remark 1. We would like to stress that SHAZOO can also be used to predict the nodes of an arbitrary graph, by first drawing a random spanning tree T of the graph and then predicting optimally on T (see, e.g., [5, 6]). The resulting mistake bound is simply the expected value of SHAZOO's mistake bound over the random draw of T. By using a fast spanning tree sampler [17], the involved computational overhead amounts to constant amortized time per node prediction on "most" graphs.
Remark 2. In certain real-world input graphs, the presence of an edge linking two nodes may also carry information about the extent to which the two nodes are dissimilar, rather than similar. This information can be encoded by the sign of the weight, and the resulting network is called a signed graph. The regularity measure is naturally extended to signed graphs by counting the weight of frustrated edges (e.g., [7]), where (i, j) is frustrated if y_i y_j ≠ sgn(w_{i,j}). Many of the existing algorithms for node classification [18, 9, 10, 5, 8, 6] can in principle be run on signed graphs. However, the computational cost may not always be preserved. For example, mincut [4] is in general NP-hard when the graph is signed [13]. Since our algorithm sparsifies the graph using trees, it can be run efficiently even in the signed case. We just need to re-define the Δ function as Δ(i) = fcut(i, −1) − fcut(i, +1), where fcut is the minimum total weight of frustrated edges consistent with the labels seen so far. The argument contained in Section 5 for positive edge weights (see, e.g., Eq. (1) therein) allows us to show that this version of Δ can also be computed efficiently. The prediction rule has to be re-defined as well: we count the parity of the number z of negative-weighted edges along the path connecting i_t to the closest node j ∈ C(H(i_t)), i.e., ŷ_{i_t} = (−1)^z sgn(Δ(j)).
Remark 3. In [5] the authors note that TREEOPT approximates a version space (Halving) algorithm on the set of tree labelings. Interestingly, SHAZOO is also an approximation to a more general Halving algorithm for weighted trees. This generalized Halving gives a weight to each labeling consistent with the labels seen so far and with the sign of Δ(f) for each fork f. These weighted labelings, which depend on the weights of the φ-edges generated by each labeling, are used for computing the predictions. One can show (details omitted due to space limitations) that this generalized Halving algorithm has a mistake bound within a constant factor of SHAZOO's.
5  Mistake bound analysis and implementation
We now show that SHAZOO is nearly optimal on every weighted tree T. We obtain an upper bound in terms of Φ_W and the structure of T, nearly matching the lower bound of Theorem 1. We first give some auxiliary notation needed for stating the mistake bound.

Given a labeled tree (T, y), a cluster is any maximal subtree whose nodes have the same label. An in-cluster line graph is any line graph that is entirely contained in a single cluster. Finally, given a line graph L, we set R^W_L = Σ_{(i,j)∈L} 1/W_{i,j}, i.e., the (resistance) distance between its terminal nodes.
Theorem 2. For any labeled and weighted tree (T, y), there exists a set L_T of O(Ψ(Φ_W)) edge-disjoint in-cluster line graphs such that the number of mistakes made by SHAZOO is at most of the order of
$$\sum_{L\in\mathcal{L}_T}\min\Big\{\,|L|,\ 1+\log\big(1+\Phi_W R^W_L\big)\Big\}.$$
The above mistake bound depends on the tree structure through L_T. The sum contains O(Ψ(Φ_W)) terms, each one being at most logarithmic in the scale-free product Φ_W R^W_L. The bound is governed by the same key quantity Ψ(Φ_W) occurring in the lower bound of Theorem 1. However, Theorem 2 also shows that SHAZOO can take advantage of trees that cannot be covered by long line graphs. For example, if the input tree T is a weighted line graph, then it is likely to contain long in-cluster lines. Hence, the factor multiplying Ψ(Φ_W) may be of the order of log(1 + Φ_W R^W_L). If, instead, T has constant diameter (e.g., a star graph), then the in-cluster lines can only contain a constant number of nodes, and the number of mistakes can never exceed O(Ψ(Φ_W)). This is a log factor improvement over WTA which, by its very nature, cannot exploit the structure of the tree it operates on.¹
As for the implementation, we start by describing a method for calculating cut(v, y) for any unlabeled node v and label value y. Let T^v be the maximal subtree of T rooted at v such that no internal node is revealed. For any node i of T^v, let T^v_i be the subtree of T^v rooted at i. Let Ω^v_i(y) be the minimum weighted cutsize of T^v_i consistent with the revealed nodes and such that y_i = y.
¹ One might wonder whether an arbitrarily large gap between the upper (Theorem 2) and lower (Theorem 1) bounds exists due to the extra factors depending on Φ_W R^W_L. One way to get around this is to follow the analysis of WTA in [6]. Specifically, we can adapt here the more general analysis from that paper (see Lemma 2 therein) that allows us to drop, for any integer K, the resistance contribution of K arbitrary non-φ edges of the line graphs in L_T (thereby reducing R^W_L for any L containing any of these edges) at the cost of increasing the mistake bound by K. The details will be given in the full version of this paper.
Since Δ(v) = cut(v, −1) − cut(v, +1) = Ω^v_v(−1) − Ω^v_v(+1), our goal is to compute Ω^v_v(y). It is easy to see by induction that the quantity Ω^v_i(y) can be recursively defined as follows, where C^v_i is the set of all children of i in T^v, and Y_j ≡ {y_j} if y_j is revealed and Y_j ≡ {−1, +1} otherwise:²
$$\Omega^v_i(y) = \begin{cases} \displaystyle\sum_{j\in C^v_i}\ \min_{y'\in Y_j}\Big(\Omega^v_j(y') + I\{y'\neq y\}\,W_{i,j}\Big), & \text{if } i \text{ is an internal node of } T^v\\[4pt] 0, & \text{otherwise.}\end{cases} \qquad (1)$$
Now, Ω^v_v(y) can be computed through a simple depth-first visit of T^v. In all backtracking steps of this visit the algorithm uses (1) to compute Ω^v_i(y) for each node i, the values Ω^v_j(y) for all children j of i having been calculated during the previous backtracking steps. The total running time is therefore linear in the number of nodes of T^v.
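A direct Python rendering of recursion (1) (our sketch; an iterative post-order traversal is used to avoid recursion limits, and the node names and data layout are illustrative):

def omega(adj, revealed, v):
    """Compute Omega^v_v(y) for y in {-1, +1} via recursion (1).
    adj maps node -> list of (neighbor, weight); `revealed` maps node ->
    observed label. T^v is explored until revealed nodes, which are its
    leaves. Returns (Omega(-1), Omega(+1)); Delta(v) is their difference."""
    omg = {}                                  # (node, y) -> value
    stack = [(v, None, False)]
    while stack:
        i, par, done = stack.pop()
        leaf = (i != v and i in revealed) or all(j == par for j, _ in adj[i])
        if not done:
            stack.append((i, par, True))      # revisit after the children
            if not leaf:
                for j, _ in adj[i]:
                    if j != par:
                        stack.append((j, i, False))
            continue
        for y in (-1, +1):
            if leaf:
                omg[(i, y)] = 0.0
            else:
                total = 0.0
                for j, w in adj[i]:
                    if j == par:
                        continue
                    Yj = [revealed[j]] if j in revealed else [-1, +1]
                    total += min(omg[(j, yp)] + (w if yp != y else 0.0)
                                 for yp in Yj)
                omg[(i, y)] = total
    return omg[(v, -1)], omg[(v, +1)]

# Delta(v) = cut(v, -1) - cut(v, +1) = difference of the two returned values.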
Next, we describe the basic implementation of SHAZOO for the online setting (a batch learning implementation is given at the end of this section). The online implementation is made up of three steps.

1. Find the hinge nodes of subtree T^{i_t}. Recall that a hinge node is either a fork or a revealed node. Observe that a fork is incident to at least three nodes lying on different hinge lines. Hence, in this step we perform a depth-first visit of T^{i_t}, marking each node lying on a hinge line. In order to accomplish this task, it suffices to single out all forks by marking each labeled node and, recursively, each parent of a marked node of T^{i_t}. At the end of this process we are able to single out the forks by counting, for each marked node i, the number of edges (i, j) such that j has been marked too. The remaining hinge nodes are the leaves of T^{i_t} whose labels have currently been revealed.

2. Compute sgn(Δ(i)) for all connection forks of H(i_t). From the previous step we can easily find the connection node(s) of H(i_t). Then, we simply exploit the above-described technique for computing the cut function, obtaining sgn(Δ(i)) for all connection forks i of H(i_t).

3. Propagate the labels of the nodes of C(H(i_t)) (only if i_t is not a fork). We perform a visit of H(i_t) starting from every node r ∈ C(H(i_t)). During these visits, we mark each node j of H(i_t) with the label of r computed in the previous step, together with the length of π(r, j), which is what we need for predicting any label of H(i_t) at the current time step.
The overall running time is dominated by the first step and the calculation of Δ(i). Hence the worst-case running time is proportional to Σ_{t≤|V|} |V(T^{i_t})|. This quantity can be quadratic in |V|, though this is rarely encountered in practice if the node presentation order is not adversarial. For example, it is easy to show that on a line graph, if the node presentation order is random, then the total time is of the order of |V| log |V|. For a star graph the total time complexity is always linear in |V|, even under adversarial orders.
In many real-world scenarios, one is interested in the more standard problem of predicting the labels of a given subset of test nodes based on the available labels of another subset of training nodes. Building on the above online implementation, we now derive an implementation of SHAZOO for this train/test (or "batch learning") setting. We first show that computing Ω^i_i(+1) and Ω^i_i(−1) for all unlabeled nodes i in T takes O(|V|) time. This allows us to compute sgn(Δ(v)) for all forks v in O(|V|) time, and then to use the first and the third steps of the online implementation. Overall, we show that predicting all labels in the test set takes O(|V|) time.
Consider the tree T^i as rooted at i. Given any unlabeled node i, we perform a visit of T^i starting at i. During the backtracking steps of this visit we use (1) to calculate Ω^i_j(y) for each node j in T^i and label y ∈ {−1, +1}. Observe now that for any pair i, j of adjacent unlabeled nodes and any label y ∈ {−1, +1}, once we have obtained Ω^i_i(y), Ω^i_j(+1) and Ω^i_j(−1), we can compute Ω^j_i(y) in constant time, as
$$\Omega^j_i(y) \;=\; \Omega^i_i(y) - \min_{y'\in\{-1,+1\}}\Big(\Omega^i_j(y') + I\{y'\neq y\}\,W_{i,j}\Big).$$
In fact, all children of j in T^i are descendants of i, while the children of i in T^i (but j) are descendants of j in T^j. Hence, once SHAZOO computes Ω^i_i(y), we can compute in constant time Ω^j_i(y) for all child nodes j of i in T^i, and use this value for computing Ω^j_j(y). Generalizing this argument, it is easy to see that in the next phase we can compute Ω^k_k(y) in constant time for all nodes k of T^i such that, for all ancestors u of k and all y ∈ {−1, +1}, the values Ω^u_u(y) have previously been computed.
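A schematic sketch (ours) of this rerooting pass, turning one rooted bottom-up computation into Ω^j_j values for every unlabeled node in overall linear time; it reuses the bottom-up tables of the previous sketch, and the traversal order and names are our own choices:

def all_roots_omega(adj, order, omg_down, revealed):
    """Top-down rerooting. `order` lists the unlabeled nodes with parents
    before children for the initial root; omg_down[(i, y)] are the bottom-up
    values of recursion (1) for that root. Returns omg_self, with
    omg_self[(j, y)] = Omega^j_j(y) for every node j in the ordering."""
    root = order[0]
    up = {(root, -1): 0.0, (root, +1): 0.0}    # nothing above the root
    omg_self = {}
    for i in order:
        for y in (-1, +1):
            omg_self[(i, y)] = omg_down[(i, y)] + up[(i, y)]
        for j, w in adj[i]:
            if j in revealed or (j, -1) in up:
                continue                        # skip labeled nodes / parent
            # Omega^j_i(y): value at i when the tree is re-rooted at j.
            om_ji = {
                y: omg_self[(i, y)]
                   - min(omg_down[(j, yp)] + (w if yp != y else 0.0)
                         for yp in (-1, +1))
                for y in (-1, +1)
            }
            # Contribution of the parent side to Omega^j_j(y).
            for y in (-1, +1):
                up[(j, y)] = min(om_ji[yp] + (w if yp != y else 0.0)
                                 for yp in (-1, +1))
    return omg_self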
² The recursive computations contained in this section are reminiscent of the sum-product algorithm [11].
The time for computing Ω^s_s(y) for all nodes s of T^i and any label y is therefore linear in the time of performing a breadth-first (or depth-first) visit of T^i, i.e., linear in the number of nodes of T^i. Since each labeled node with degree d is part of at most d trees T^i for some i, the total number of nodes over all distinct (edge-disjoint) trees T^i, i ∈ V, is linear in |V|.

Finally, we need to propagate the connection node labels of each hinge tree as in the third step of the online implementation. Since this last step also takes linear time, we conclude that the total time for predicting all labels is linear in |V|.
6  Experiments
We tested our algorithm on a number of real-world weighted graphs from different domains (character recognition, text categorization, bioinformatics, Web spam detection) against the following baselines.

Online Majority Vote (OMV). An intuitive and fast algorithm for sequentially predicting the node labels is a weighted majority vote over the labels of the adjacent nodes seen so far. Namely, OMV predicts y_{i_t} through the sign of Σ_s y_{i_s} w_{i_s,i_t}, where s ranges over s < t such that (i_s, i_t) ∈ E. Both the total time and space required by OMV are Θ(|E|).
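OMV in a few lines (a sketch of ours; the data layout and the tie-breaking toward the default −1 are illustrative choices):

def omv_predict(history, neighbors_of_t):
    """Online Majority Vote: sign of the weighted vote of already-revealed
    neighbors. `history` maps node -> revealed label; `neighbors_of_t` is a
    list of (neighbor, weight) pairs for the node to be predicted."""
    vote = sum(w * history[v] for v, w in neighbors_of_t if v in history)
    return 1 if vote > 0 else -1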
Label Propagation (LABPROP). LABPROP [18, 2, 3] is a batch transductive learning method computed by solving a system of linear equations, which requires total time of the order of |E|·|V|. This relatively high computational cost should be taken into account when comparing LABPROP to faster online algorithms. Recall that OMV can be viewed as a fast "online approximation" to LABPROP.

Weighted Tree Algorithm (WTA). As explained in the introductory section, WTA can be viewed as a special case of SHAZOO. When the input graph is not a line, WTA turns it into a line by first extracting a spanning tree of the graph, and then linearizing it. The implementation described in [6] runs in constant amortized time per prediction whenever the spanning tree sampler runs in time Θ(|V|).

The Graph Perceptron algorithm [10] is another readily available baseline. This algorithm has been excluded from our comparison because it does not seem to be very competitive in terms of performance (see, e.g., [6]), and is also computationally expensive.

In our experiments, we combined SHAZOO and WTA with spanning trees generated in different ways (note that OMV and LABPROP do not need to extract spanning trees from the input graph).
Random Spanning Tree (RST). Following Ch. 4 of [12], we draw a weighted spanning tree with probability proportional to the product of its edge weights. We also tested our algorithms combined with random spanning trees generated uniformly at random, ignoring the edge weights (i.e., the weights were only used to compute predictions on the randomly generated tree); we call these spanning trees NWRST (no-weight RST). On most graphs, this procedure can be run in time linear in the number of nodes [17]. Hence, the combinations SHAZOO+NWRST and WTA+NWRST run in $O(|V|)$ time on most graphs.
Minimum Spanning Tree (MST). This is just the minimal weight spanning tree, where the weight of a spanning tree is the sum of its edge weights. This is the tree that best approximates the original graph in terms of the trace norm distance of the corresponding Laplacian matrices.
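For concreteness, here is one way to draw these trees in Python: Wilson's algorithm [17] via loop-erased random walks for RST/NWRST (stepping uniformly gives NWRST; stepping proportionally to edge weights samples a tree with probability proportional to the product of its weights), and Kruskal's algorithm for the MST. This is a sketch, not the implementation used in the experiments.

```python
import random

def random_spanning_tree(nodes, adj, weight=None):
    # Wilson's algorithm via loop-erased random walks. With weight=None the
    # walk steps uniformly (NWRST); with weight[(u, v)] defined for both edge
    # orientations, steps are drawn proportionally to edge weight (RST).
    def step(u):
        nbrs = adj[u]
        if weight is None:
            return random.choice(nbrs)
        return random.choices(nbrs, weights=[weight[(u, v)] for v in nbrs])[0]

    root = random.choice(nodes)
    in_tree, nxt = {root}, {}
    for start in nodes:
        u = start
        while u not in in_tree:      # walk until the tree is hit; revisits
            nxt[u] = step(u)         # overwrite earlier exits, erasing loops
            u = nxt[u]
        u = start
        while u not in in_tree:      # retrace the loop-erased path into the tree
            in_tree.add(u)
            u = nxt[u]
    return [(u, nxt[u]) for u in nodes if u != root]

def minimum_spanning_tree(nodes, weighted_edges):
    # Kruskal's algorithm; weighted_edges is a list of (w, u, v) triples.
    parent = {u: u for u in nodes}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]    # path halving
            u = parent[u]
        return u
    tree = []
    for w, u, v in sorted(weighted_edges, key=lambda e: e[0]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree
```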
Following [10, 6], we also ran SHAZOO and WTA using committees of spanning trees, aggregating predictions via a majority vote. The resulting algorithms are denoted by k*SHAZOO and k*WTA, where k is the number of spanning trees in the aggregation. We used either k = 7, 11 or k = 3, 7, depending on the dataset size.
For our experiments, we used five datasets: RCV1, USPS, KROGAN, COMBINED, and WEBSPAM. WEBSPAM is a big dataset (110,900 nodes and 1,836,136 edges) of inter-host links created for the Web Spam Challenge 2008 [15].³ KROGAN (2,169 nodes and 6,102 edges) and COMBINED (2,871 nodes and 6,407 edges) are high-throughput protein-protein interaction networks of budding yeast taken from [14]; see [6] for a more complete description. Finally, USPS and RCV1 are graphs obtained from the USPS handwritten characters dataset (all ten categories) and the first 10,000 documents in chronological order of Reuters Corpus Vol. 1 (the four most frequent categories), respectively. In both cases, we used Euclidean 10-Nearest Neighbor to create edges.

³ We do not compare our results to those obtained within the challenge since we are only exploiting the graph (weighted) topology here, disregarding content features.
Each edge weight $w_{i,j}$ is equal to $e^{-\|x_i - x_j\|^2 / \sigma_{i,j}^2}$, where we set $\sigma_{i,j}^2 = \frac{1}{2}\left(\sigma_i^2 + \sigma_j^2\right)$ and $\sigma_i^2$ is the average squared distance between $i$ and its 10 nearest neighbours.
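A small NumPy sketch of this graph construction (illustrative only; it materializes the full distance matrix, so it is meant for modest numbers of points):

```python
import numpy as np

def knn_gaussian_graph(X, k=10):
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)     # pairwise squared distances
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :k]                        # k nearest neighbours per node
    sigma2 = np.take_along_axis(d2, nn, axis=1).mean(axis=1)  # sigma_i^2
    edges = {}
    for i in range(len(X)):
        for j in map(int, nn[i]):
            s2 = 0.5 * (sigma2[i] + sigma2[j])                # sigma_{i,j}^2
            edges[(min(i, j), max(i, j))] = float(np.exp(-d2[i, j] / s2))
    return edges
```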
Following previous experimental settings [6], we associate binary classification tasks with the five
datasets/graphs via a standard one-vs-all reduction. Each error rate is obtained by averaging over ten
randomly chosen training sets (and ten different trees in the case of RST and NWRST). WEBSPAM
is natively a binary classification problem, and we used the same train/test split provided with the
dataset: 3,897 training nodes and 1,993 test nodes (the remaining nodes being unlabeled).
In the table below, we show the macro-averaged classification error rates (percentages) achieved by the various algorithms on the first four datasets mentioned in the main text. For each dataset we trained ten times over a random subset of 5%, 10% and 25% of the total number of nodes and tested on the remaining ones. Asterisks mark the lowest error rates in each column, excluding LABPROP, which is used as a "yardstick" comparison. Standard deviations averaged over the binary problems are small: most of the time less than 0.5%.
Datasets          USPS                    RCV1                      KROGAN                    COMBINED
Predictors        5%      10%     25%     5%      10%     25%       5%      10%     25%       5%      10%     25%
SHAZOO+RST        3.62    2.82    2.02    21.72   18.70   15.68     18.11   17.68   17.10     17.77   17.24   17.34
SHAZOO+NWRST      3.88    3.03    2.18    21.97   19.21   15.95     18.11   18.14   17.32     17.22   17.21   17.53
SHAZOO+MST        1.07*   0.96*   0.80*   17.71   14.87   11.73     17.46   16.92   16.30     16.79   16.64   17.15
WTA+RST           5.34    4.23    3.02    25.53   22.66   19.05     21.82   21.05   20.08     21.76   21.38   20.26
WTA+NWRST         5.74    4.45    3.26    25.50   22.70   19.24     21.90   21.28   20.18     21.58   21.42   20.64
WTA+MST           1.81    1.60    1.21    21.07   17.94   13.92     21.41   20.63   19.61     21.74   21.20   20.32
7*SHAZOO+RST      1.68    1.28    0.97    16.33   13.52   11.07     15.54   15.58   15.46     15.12   15.24   15.84
7*SHAZOO+NWRST    1.89    1.38    1.06    16.49   13.98   11.37     15.61   15.62   15.50     15.02   15.12   15.80
7*WTA+RST         2.10    1.56    1.14    17.44   14.74   12.15     16.75   16.64   15.88     16.42   16.09   15.72
7*WTA+NWRST       2.33    1.73    1.24    17.69   15.18   12.53     16.71   16.60   16.00     16.24   16.13   15.79
11*SHAZOO+RST     1.52    1.17    0.89    15.82*  13.04*  10.59*    15.36*  15.40   15.29*    14.91   15.06   15.61
11*SHAZOO+NWRST   1.70    1.27    0.98    15.95   13.42   10.93     15.40   15.33*  15.32     14.87*  14.99*  15.67
11*WTA+RST        1.84    1.36    1.01    16.40   13.95   11.42     16.20   16.15   15.53     15.90   15.58   15.30*
11*WTA+NWRST      2.04    1.51    1.12    16.70   14.28   11.68     16.22   16.05   15.50     15.74   15.57   15.33
OMV               24.79   12.34   2.10    31.65   22.35   11.79     43.13   38.75   29.84     44.72   40.86   33.24
LABPROP           1.95    1.11    0.82    16.28   12.99   10.00     15.56   14.98   15.23     14.79   14.93   15.18
Next, we extract from the above table a specific comparison among SHAZOO, WTA, and LABPROP. SHAZOO and WTA use a single minimum spanning tree (the best performing tree type for both algorithms). Note that SHAZOO consistently outperforms WTA.

We then report the results on WEBSPAM. SHAZOO and WTA use only non-weighted random spanning trees (NWRST) to optimize scalability. Since this dataset is extremely unbalanced (5.4% positive labels), we use the average test set F-measure instead of the error rate.
            SHAZOO   WTA     OMV     LABPROP   3*WTA   3*SHAZOO   7*WTA   7*SHAZOO
F-measure   0.954    0.947   0.706   0.931     0.967   0.964      0.968   0.968
Our empirical results can be briefly summarized as follows:
1. Without using committees, SHAZOO outperforms WTA on all datasets, irrespective of the type of spanning tree being used. With committees, SHAZOO works better than WTA almost always, although the gap between the two reduces.
2. The predictive performance of SHAZOO+MST is comparable to, and sometimes better than, that of LABPROP, though the latter algorithm is slower.
3. k*SHAZOO, with k = 11 (or k = 7 on WEBSPAM), seems to be especially effective, outperforming LABPROP with a small (e.g., 5%) training set size.
4. NWRST does not offer the same theoretical guarantees as RST, but it is extremely fast to generate (linear in |V| on most graphs, e.g., [1]), and in our experiments it is only slightly inferior to RST.
References
[1] N. Alon, C. Avin, M. Koucký, G. Kozma, Z. Lotker, and M.R. Tuttle. Many random walks are faster than one. In Proc. 20th Symp. on Parallel Algo. and Architectures, pages 119-128. Springer, 2008.
[2] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and semi-supervised learning on large graphs. In Proceedings of the 17th Annual Conference on Learning Theory, pages 624-638. Springer, 2004.
[3] Y. Bengio, O. Delalleau, and N. Le Roux. Label propagation and quadratic criterion. In Semi-Supervised Learning, pages 193-216. MIT Press, 2006.
[4] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In Proceedings of the 18th International Conference on Machine Learning. Morgan Kaufmann, 2001.
[5] N. Cesa-Bianchi, C. Gentile, and F. Vitale. Fast and optimal prediction of a labeled tree. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[6] N. Cesa-Bianchi, C. Gentile, F. Vitale, and G. Zappella. Random spanning trees and the prediction of weighted graphs. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[7] G. Iacono and C. Altafini. Monotonicity, frustration, and ordered response: an analysis of the energy landscape of perturbed large-scale biological networks. BMC Systems Biology, 4(83), 2010.
[8] M. Herbster and G. Lever. Predicting the labelling of a graph via minimum p-seminorm interpolation. In Proceedings of the 22nd Annual Conference on Learning Theory. Omnipress, 2009.
[9] M. Herbster, G. Lever, and M. Pontil. Online prediction on large diameter graphs. In Advances in Neural Information Processing Systems 22. MIT Press, 2009.
[10] M. Herbster, M. Pontil, and S. Rojas-Galeano. Fast prediction on a tree. In Advances in Neural Information Processing Systems 22. MIT Press, 2009.
[11] F.R. Kschischang, B.J. Frey, and H.A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498-519, 2001.
[12] R. Lyons and Y. Peres. Probability on trees and networks. Manuscript, 2008.
[13] S.T. McCormick, M.R. Rao, and G. Rinaldi. Easy and difficult objective functions for max cut. Math. Program., 94(2-3):459-466, 2003.
[14] G. Pandey, M. Steinbach, R. Gupta, T. Garg, and V. Kumar. Association analysis-based transformations for protein interaction networks: a function prediction case study. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 540-549. ACM Press, 2007.
[15] Yahoo! Research (Barcelona) and Laboratory of Web Algorithmics (Univ. of Milan). Web spam collection. URL: barcelona.research.yahoo.net/webspam/datasets/.
[16] D.A. Spielman and N. Srivastava. Graph sparsification by effective resistances. In Proc. of the 40th Annual ACM Symposium on Theory of Computing (STOC 2008). ACM Press, 2008.
[17] D.B. Wilson. Generating random spanning trees more quickly than the cover time. In Proceedings of the 28th ACM Symposium on the Theory of Computing, pages 296-303. ACM Press, 1996.
[18] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning, 2003.
Monte Carlo Value Iteration with Macro-Actions
Zhan Wei Lim
David Hsu
Wee Sun Lee
Department of Computer Science, National University of Singapore
Singapore, 117417, Singapore
Abstract
POMDP planning faces two major computational challenges: large state spaces
and long planning horizons. The recently introduced Monte Carlo Value Iteration (MCVI) can tackle POMDPs with very large discrete state spaces or continuous state spaces, but its performance degrades when faced with long planning
horizons. This paper presents Macro-MCVI, which extends MCVI by exploiting macro-actions for temporal abstraction. We provide sufficient conditions for
Macro-MCVI to inherit the good theoretical properties of MCVI. Macro-MCVI
does not require explicit construction of probabilistic models for macro-actions
and is thus easy to apply in practice. Experiments show that Macro-MCVI substantially improves the performance of MCVI with suitable macro-actions.
1 Introduction
Partially observable Markov decision process (POMDP) provides a principled general framework for planning with imperfect state information. In POMDP planning, we represent an agent's possible states probabilistically as a belief and systematically reason over the space of all beliefs in order to derive a policy that is robust under uncertainty. POMDP planning, however, faces two major computational challenges. The first is the "curse of dimensionality". A complex planning task involves a large number of states, which results in a high-dimensional belief space. The second obstacle is the "curse of history". In applications such as robot motion planning, an agent often takes many actions before reaching the goal, resulting in a long planning horizon. The complexity of the planning task grows very fast with the horizon.
Point-based approximate algorithms [10, 14, 9] have brought dramatic progress to POMDP planning. Some of the fastest ones, such as HSVI [14] and SARSOP [9], can solve moderately complex
POMDPs with hundreds of thousands of states in reasonable time. The recently introduced Monte Carlo Value Iteration (MCVI) [2] takes one step further. It can tackle POMDPs with very large discrete state spaces or continuous state spaces. The main idea of MCVI is to sample both an agent's
state space and the corresponding belief space simultaneously, thus avoiding the prohibitive computational cost of unnecessarily processing these spaces in their entirety. It uses Monte Carlo sampling
in conjunction with dynamic programming to compute a policy represented as a finite state controller. Both theoretical analysis and experiments on several robotic motion planning tasks indicate
that MCVI is a promising approach for planning under uncertainty with very large state spaces, and
it has already been applied successfully to compute the threat resolution logic for aircraft collision
avoidance systems in 3-D space [1].
However, the performance of MCVI degrades as the planning horizon increases. Temporal abstraction using macro-actions is effective in mitigating the negative effect and has achieved good results in earlier work on Markov decision processes (MDPs) and POMDPs (see Section 2). In this work, we show that macro-actions can be seamlessly integrated into MCVI, leading to the Macro-MCVI algorithm. Unfortunately, the theoretical properties of MCVI, such as the approximation error bounds [2], do not carry over to Macro-MCVI automatically if arbitrary mappings from beliefs to actions are allowed as macro-actions. We give sufficient conditions for the good theoretical properties
to be retained, transforming POMDPs into a particular type of partially observable semi-Markov
decision processes (POSMDPs) in which the lengths of macro-actions are not observable.
A major advantage of the new algorithm is its ability to abstract away the lengths of macro-actions in
planning and reduce the effect of long planning horizons. Furthermore, it does not require explicit
probabilistic models for macro-actions and treats them just like primitive actions in MCVI. This
simplifies macro-action construction and is a major benefit in practice. Macro-MCVI can also be
used to construct a hierarchy of macro-actions for planning large spaces. Experiments show that the
algorithm is effective with suitably designed macro-actions.
2 Related Work
Macro-actions have long been used to speed up planning and learning algorithms for MDPs (see,
e.g., [6, 15, 3]). Similarly, they have been used in offline policy computation for POMDPs [16, 8].
Macro-actions can be composed hierarchically to further improve scalability [4, 11]. These earlier
works rely on vector representations for beliefs and value functions, making it difficult to scale up to
large state spaces. Macro-actions have also been used in online search algorithms for POMDPs [7].
Macro-MCVI is related to Hansen and Zhou's work [5]. The earlier work uses finite state controllers
for policy representation and policy iteration for policy computation, but it has not yet been shown
to work on large state spaces. Expectation-maximization (EM) can be used to train finite state
controllers [17] and potentially handle large state spaces, but it often gets stuck in local optima.
3 Planning with Macro-actions
We would like to generalize POMDPs to handle macro-actions. Ideally, the generalization should
retain properties of POMDPs such as piecewise linear and convex finite horizon value functions. We
would also like the approximation bounds for MCVI [2] to hold with macro-actions.
We would like to allow our macro-actions to be as powerful as possible. A very powerful representation for a macro-action would be to allow it to be an arbitrary mapping from belief to action
that will run until some termination condition is met. Unfortunately, the value function of a process
with such macro-actions need not even be continuous. Consider the following simple finite horizon example, with horizon one. Assume that there are two primitive actions, both with constant
rewards, regardless of state. Consider two macro-actions, one which selects the poorer primitive
action all the time while the other which selects the better primitive action for some beliefs. Clearly,
the second macro-action dominates the first macro-action over the entire belief space. The reward
for the second macro-action takes two possible values depending on which action is selected for the
belief. The reward function also forms the optimal value function of the process and need not even
be continuous as the macro-action can be an arbitrary mapping from belief to action.
Next, we give sufficient conditions for the process to retain piecewise linearity and convexity of
the value function. We do this by constructing a type of partially observable semi-Markov decision
process (POSMDP) with the desired property. The POSMDP does not need to have the length of
the macro-action observed, a property that can be practically very useful as it allows the branching
factor for search to be significantly smaller. Furthermore, the process is a strict generalization of a
POMDP as it reduces to a POMDP when all the macro-actions have length one.
3.1 Partially Observable Semi-Markov Decision Process
Finite-horizon (undiscounted) POSMDPs were studied in [18]. Here, we focus on a type of infinite-horizon discounted POSMDPs whose transition intervals are not observable. Our POSMDP is formally defined as a tuple $(S, A, O, T, R, \gamma)$, where $S$ is a state space, $A$ is a macro-action space, $O$ is a macro-observation space, $T$ is a joint transition and observation function, $R$ is a reward function, and $\gamma \in (0, 1)$ is a discount factor. If we apply a macro-action $a$ with start state $s_i$, $T = p(s_j, o, k \mid s_i, a)$ encodes the joint conditional probability of the end state $s_j$, macro-observation $o$, and the number of time steps $k$ that it takes for $a$ to reach $s_j$ from $s_i$. We could decompose $T$ into a state-transition function and an observation function, but avoid doing so here to remain general and simplify the notation. The reward function $R$ gives the discounted cumulative reward for a macro-action $a$ that starts at state $s$: $R(s, a) = \sum_{t=0}^{\infty} \gamma^t E(r_t \mid s, a)$, where $E(r_t \mid s, a)$ is the expected reward at step $t$. Here we assume that the reward is 0 once a macro-action terminates.
For convenience, we will work with reweighted beliefs instead of beliefs. Assuming that the number of states is $n$, a reweighted belief (like a belief) is a vector of $n$ non-negative numbers that sums to one. By assuming that the POSMDP process will stop with probability $1-\gamma$ at each time step, we can interpret the reweighted belief as the conditional probability of a state given that the process has not stopped. This gives an interpretation of the reweighted belief in terms of the discount factor. Given a reweighted belief, we compute the next reweighted belief given macro-action $a$ and observation $o$, $b' = \tau(b, a, o)$, as follows:
$$b'(s) = \frac{\sum_{k=1}^{\infty} \gamma^{k-1} \sum_{i=1}^{n} p(s, o, k \mid s_i, a)\, b(s_i)}{\sum_{k=1}^{\infty} \gamma^{k-1} \sum_{j=1}^{n} \sum_{i=1}^{n} p(s_j, o, k \mid s_i, a)\, b(s_i)}. \qquad (1)$$
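When the model is small enough to store explicitly, equation (1) can be evaluated directly by truncating the sum over $k$; the sketch below assumes a hypothetical dense array `T[k, o, i, j]` $= p(s_j, o, k \mid s_i, a)$ for the chosen macro-action. (The algorithm in Section 4 avoids this and uses sampling instead.)

```python
import numpy as np

def belief_update(b, T, gamma, o, k_max=200):
    # T[k-1, o] is the n-by-n matrix p(s_j, o, k | s_i, a) for the chosen
    # macro-action; truncating at k_max leaves an O(gamma^k_max) tail.
    num = np.zeros_like(b)
    for k in range(1, k_max + 1):
        num += gamma ** (k - 1) * (b @ T[k - 1, o])   # sum_i p(., o, k | s_i, a) b(s_i)
    p_gamma = num.sum()                               # the denominator p_gamma(o | a, b)
    if p_gamma == 0.0:
        raise ValueError("observation has zero probability under (b, a)")
    return num / p_gamma, p_gamma
```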
We will simply refer to the reweighted belief as a belief from here on. We denote the denominator $\sum_{k=1}^{\infty} \gamma^{k-1} \sum_{j=1}^{n} \sum_{i=1}^{n} p(s_j, o, k \mid s_i, a)\, b(s_i)$ by $p_\gamma(o \mid a, b)$. The value of $\gamma\, p_\gamma(o \mid a, b)$ can be interpreted as the probability that observation $o$ is received and the POSMDP has not stopped. Note that $\sum_{o} p_\gamma(o \mid a, b)$ may sum to less than 1 due to discounting.
A policy $\pi$ is a mapping from a belief to a macro-action. Let $R(b, a) = \sum_{s} b(s) R(s, a)$. The value of a policy $\pi$ can be defined recursively as
$$V_\pi(b) = R(b, \pi(b)) + \gamma \sum_{o} p_\gamma(o \mid \pi(b), b)\, V_\pi(\tau(b, \pi(b), o)).$$
Note that the policy operates on the belief and may not know the number of steps taken by the macro-actions. If knowledge of the number of steps is important, it can be added into the observation function in the modeling process.
We now define the backup operator $H$ that operates on a value function $V_m$ and returns $V_{m+1}$:
$$HV(b) = \max_{a} \Bigl( R(b, a) + \gamma \sum_{o \in O} p_\gamma(o \mid a, b)\, V(\tau(b, a, o)) \Bigr). \qquad (2)$$
The backup operator is a contractive mapping.¹
Lemma 1 Given value functions $U$ and $V$, $\|HU - HV\|_\infty \le \gamma\, \|U - V\|_\infty$.
Let the value of an optimal policy $\pi^*$ be $V^*$. The following theorem is a consequence of the Banach fixed point theorem and Lemma 1.

Theorem 1 $V^*$ is the unique fixed point of $H$ and satisfies the Bellman equation $V^* = HV^*$.
We call a policy an $m$-step policy if the number of times the macro-actions are applied is $m$. For $m$-step policies, $V^*$ can be approximated by a finite set of linear functions; the weight vectors of these linear functions are called the $\alpha$-vectors.

Theorem 2 The value function for an $m$-step policy is piecewise linear and convex and can be represented as
$$V_m(b) = \max_{\alpha \in \Gamma_m} \sum_{s \in S} \alpha(s)\, b(s) \qquad (3)$$
where $\Gamma_m$ is a finite collection of $\alpha$-vectors.

As $V_m$ is convex and converges to $V^*$, $V^*$ is also convex.
3.2 Macro-action Construction
We would like to construct macro-actions from primitive actions of a POMDP in order to use temporal abstraction to help solve difficult POMDP problems. A partially observable Markov decision process (POMDP) is defined by a finite state space $S$, a finite action space $A$, a reward function $R(s, a)$, an observation space $O$, and a discount factor $\gamma \in (0, 1)$.
In our POSMDP, the probability function $p(s_j, o, k \mid s_i, a)$ for a macro-action must be independent of the history given the current state $s_i$; hence the selection of primitive actions and termination conditions within the macro-action cannot depend on the belief. We examine some allowable dependencies here. Due to partial observability, it is often not possible to allow the primitive action and
the termination condition to be functions of the initial state. Dependence on the portion of history
¹ Proofs of the results in this section are included in the supplementary material.
that occurs after the macro-action has started is, however, allowed. In some POMDPs, a subset of
the state variables are always observed and can be used to decide the next action. In fact, we may
sometimes explicitly construct observed variables to remember relevant parts of the history prior to
the start of macro-action (see Section 5); these can be considered as parameters that are passed on to
the macro-action. Hence, one way to construct the next action in a macro-action is to make it a function of the history since the macro-action started, $x_k, a_k, o_{k+1}, \ldots, x_{t-1}, a_{t-1}, o_t, x_t$, where $x_i$ is the fully observable subset of state variables at time $i$, and $k$ is the starting time of the macro-action.
Similarly, when the termination criterion and the observation function of the macro-action depend only on the history $x_k, a_k, o_{k+1}, \ldots, x_{t-1}, a_{t-1}, o_t, x_t$, the macro-action can retain a transition
function that is independent of the history given the initial state. Note that the observation to be
passed on to the POSMDP to create the POSMDP observation space, $O$, is part of the design tradeoff: usually it is desirable to reduce the number of observations in order to reduce complexity
without degrading the value of the POSMDP too much. In particular, we may not wish to include
the execution length of the macro-action if it does not contribute much towards obtaining a good
policy.
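As an illustration of this construction, the following sketch (with made-up names, in the spirit of the underwater task of Section 5) implements a macro-action whose action choice and termination test depend only on the history accumulated since the macro-action started:

```python
class MoveUntilBeacon:
    """Illustrative macro-action: repeat one primitive action until a
    distinguished observation arrives or a step limit is reached. Both the
    action choice and the termination test depend only on the history
    accumulated since the macro-action started, as required above."""

    def __init__(self, direction, beacon_obs, max_steps=100):
        self.direction = direction        # the primitive action to repeat
        self.beacon_obs = beacon_obs      # terminating observation
        self.max_steps = max_steps
        self.steps, self.last_obs = 0, None

    def next_action(self):
        return self.direction

    def record(self, obs):
        self.steps += 1
        self.last_obs = obs

    def done(self):
        return self.last_obs == self.beacon_obs or self.steps >= self.max_steps

    def emit_observation(self):
        # macro-observation handed back to the POSMDP when the action ends
        return self.last_obs
```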
4 Monte Carlo Value Iteration with Macro-Actions
We have shown that if the action space $A$ and the observation space $O$ of a POSMDP are discrete, then the optimal value function $V^*$ can be approximated arbitrarily closely by a piecewise-linear,
convex function. Unfortunately, when S is very high-dimensional (or continuous), a vector representation is no longer effective. In this section, we show how the Monte Carlo Value Iteration (MCVI)
algorithm [2], which has been designed for POMDPs with very large or infinite state spaces, can be
extended to POSMDP.
Instead of $\alpha$-vectors, MCVI uses an alternative policy representation called a policy graph $G$. A policy graph is a directed graph with labeled nodes and edges. Each node of $G$ is labeled with a macro-action $a$ and each edge of $G$ is labeled with an observation $o$. To execute a policy $\pi_G$, it is treated as a finite state controller whose states are the nodes of $G$. Given an initial belief $b$, a starting node $v$ of $G$ is selected and its associated macro-action $a_v$ is performed. The controller then transitions from $v$ to a new node $v'$ by following the edge $(v, v')$ labeled with the observation received, $o$. The process then repeats with the new controller node $v'$.

Let $\pi_{G,v}$ denote the policy represented by $G$ when the controller always starts in node $v$ of $G$. We define the value $\alpha_v(s)$ to be the expected total reward of executing $\pi_{G,v}$ with initial state $s$. Hence
$$V_G(b) = \max_{v \in G} \sum_{s \in S} \alpha_v(s)\, b(s). \qquad (4)$$
$V_G$ is completely determined by the $\alpha$-functions associated with the nodes of $G$.
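Executing a policy graph is straightforward; the sketch below assumes `graph[v]` stores the macro-action of node $v$ together with its outgoing observation-labeled edges, and that a simulator `env.step` runs one macro-action and reports the observation, the reward discounted within the macro-action, and its duration $k$ (these interfaces are assumptions for illustration):

```python
def run_policy_graph(start_node, graph, env, gamma, max_macro=1000):
    # graph[v] -> (macro_action, {observation: next_node}); env.step(a) runs
    # macro-action a and returns (obs, reward_within_macro, k), where the
    # reward is already discounted inside the macro-action and k is its length.
    v, total, discount = start_node, 0.0, 1.0
    for _ in range(max_macro):
        action, transitions = graph[v]
        obs, reward_in_macro, k = env.step(action)
        total += discount * reward_in_macro
        discount *= gamma ** k
        v = transitions[obs]
    return total
```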
4.1 MC-Backup
One way to approximate the value function is to repeatedly run the backup operator H starting
from an arbitrary value function until it is close to convergence. This algorithm is called value
iteration (VI). Value iteration can be carried out on policy graphs as well, as it provides an implicit
representation of a value function. Let VG be the value function for a policy graph G. Substituting
(4) into (2), we get
$$HV_G(b) = \max_{a \in A} \Bigl\{ \sum_{s \in S} R(s, a)\, b(s) + \gamma \sum_{o \in O} p_\gamma(o \mid a, b) \max_{v \in G} \sum_{s \in S} \alpha_v(s)\, b'(s) \Bigr\}. \qquad (5)$$
It is possible to then evaluate the right-hand side of (5) via sampling and Monte Carlo simulation at a belief $b$. The outcome is a new policy graph $G'$ with value function $\hat{H}_b V_G$. This is called MC-backup of $G$ at $b$ (Algorithm 1) [2].
There are $|A||G|^{|O|}$ possible ways to generate a new policy graph $G'$ which has one new node compared to the old policy graph. Algorithm 1 computes an estimate of the best new policy graph at $b$ using only $N|A||G|$ samples. Furthermore, we can show that MC-backup approximates the standard VI backup (equation (5)) well at $b$, with error decreasing at the rate $O(1/\sqrt{N})$. Let $R_{\max}$ be the largest absolute value of the reward, $|r_t|$, at any time step.
Algorithm 1 MC-Backup of a policy graph $G$ at a belief $b \in B$ with $N$ samples.

MC-BACKUP($G$, $b$, $N$)
1: For each action $a \in A$, $R_a \leftarrow 0$.
2: For each action $a \in A$, each observation $o \in O$, and each node $v \in G$, $V_{a,o,v} \leftarrow 0$.
3: for each action $a \in A$ do
4:   for $i = 1$ to $N$ do
5:     Sample a state $s_i$ with probability $b(s_i)$.
6:     Simulate taking macro-action $a$ in state $s_i$. Generate a new state $s'_i$, observation $o_i$, and discounted reward $R'(s_i, a)$ by sampling from $p(s_j, o, k \mid s_i, a)$.
7:     $R_a \leftarrow R_a + R'(s_i, a)$.
8:     for each node $v \in G$ do
9:       Set $V'$ to be the expected total reward of simulating the policy represented by $G$, with initial controller state $v$ and initial state $s'_i$.
10:      $V_{a,o_i,v} \leftarrow V_{a,o_i,v} + V'$.
11:  for each observation $o \in O$ do
12:    $V_{a,o} \leftarrow \max_{v \in G} V_{a,o,v}$.
13:    $v_{a,o} \leftarrow \arg\max_{v \in G} V_{a,o,v}$.
14:  $V_a \leftarrow (R_a + \gamma \sum_{o \in O} V_{a,o}) / N$.
15: $V^* \leftarrow \max_{a \in A} V_a$.
16: $a^* \leftarrow \arg\max_{a \in A} V_a$.
17: Create a new policy graph $G'$ by adding a new node $u$ to $G$. Label $u$ with $a^*$. For each $o \in O$, add the edge $(u, v_{a^*,o})$ and label it with $o$.
18: return $G'$.
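A direct Python transcription of Algorithm 1 (line numbers in the comments refer to the pseudocode above); the belief object, simulator, rollout estimator, and graph-mutation calls are assumed interfaces for illustration, not part of the original implementation:

```python
from collections import defaultdict

def mc_backup(G, b, N, A, O, gamma, simulate, rollout):
    # Assumed interfaces: b.sample() draws a state from belief b; simulate(s, a)
    # returns (s_next, obs, r) with r the discounted reward of running macro-
    # action a from s (lines 5-6); rollout(v, s) estimates the total reward of
    # executing the policy graph from controller node v and state s (line 9);
    # G.nodes / G.add_node / G.add_edge form a hypothetical graph API.
    R = {a: 0.0 for a in A}                            # line 1
    V = defaultdict(float)                             # line 2: V[(a, o, v)]
    for a in A:                                        # line 3
        for _ in range(N):                             # line 4
            s = b.sample()                             # line 5
            s_next, o_i, r = simulate(s, a)            # line 6
            R[a] += r                                  # line 7
            for v in G.nodes:                          # line 8
                V[(a, o_i, v)] += rollout(v, s_next)   # lines 9-10
    best_a, best_val, best_edges = None, float("-inf"), {}
    for a in A:
        edges, Va = {}, R[a]
        for o in O:                                    # lines 11-13
            v_star = max(G.nodes, key=lambda v: V[(a, o, v)])
            edges[o] = v_star
            Va += gamma * V[(a, o, v_star)]
        Va /= N                                        # line 14
        if Va > best_val:                              # lines 15-16
            best_a, best_val, best_edges = a, Va, edges
    u = G.add_node(label=best_a)                       # line 17
    for o, v_star in best_edges.items():
        G.add_edge(u, v_star, label=o)
    return G                                           # line 18: the new G'
```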
Theorem 3 Given a policy graph $G$ and a point $b \in B$, MC-BACKUP($G$, $b$, $N$) produces an improved policy graph such that
$$\bigl| \hat{H}_b V_G(b) - H V_G(b) \bigr| \;\le\; \frac{2 R_{\max}}{1 - \gamma} \sqrt{\frac{2|O| \ln |G| + \ln(2|A|) + \ln(1/\tau)}{N}},$$
with probability at least $1 - \tau$.

The proof uses the Hoeffding bound together with the union bound. Details can be found in [2].
MC-backup can be combined with point-based POMDP planning, which samples the belief space $\mathcal{B}$. Point-based POMDP algorithms use a set $B$ of points sampled from $\mathcal{B}$ as an approximate representation of $\mathcal{B}$. In contrast to the standard VI backup operator $H$, which performs backup at every point in $\mathcal{B}$, the operator $\hat{H}_B$ applies MC-BACKUP($G_m$, $b$, $N$) on a policy graph $G_m$ at every point in $B$. This results in $|B|$ new policy graph nodes. $\hat{H}_B$ then produces a new policy graph $G_{m+1}$ by adding the new policy graph nodes to the previous policy graph $G_m$.

Let $\epsilon_B = \sup_{b \in \mathcal{B}} \min_{b' \in B} \|b - b'\|_1$ be the maximum $L_1$ distance from any point in $\mathcal{B}$ to the closest point in $B$. Let $V_0$ be the value function for some initial policy graph and $V_{m+1} = \hat{H}_B V_m$. The theorem below bounds the approximation error between $V_m$ and the optimal value function $V^*$.
Theorem 4 For every $b \in B$,
$$\bigl| V^*(b) - V_m(b) \bigr| \;\le\; \frac{2 R_{\max}}{(1-\gamma)^2} \sqrt{\frac{2|O| \ln(|B|m) + \ln(2|A|) + \ln(|B|m/\tau)}{N}} \;+\; \frac{2 R_{\max}}{(1-\gamma)^2}\, \epsilon_B \;+\; \frac{2 \gamma^m R_{\max}}{1 - \gamma},$$
with probability at least $1 - \tau$.
The proof requires the contraction property and a Lipschitz property that can be derived from the piecewise linearity of the value function. Having established those results in Section 3.1, the rest of the proof follows from the proof in [2]. The first term in the bound in Theorem 4 comes from Theorem 3, showing that the error from sampling decays at the rate $O(1/\sqrt{N})$ and can be reduced by taking a large enough sample size. The second term depends on how well the set $B$ covers $\mathcal{B}$ and can be reduced by sampling a larger number of beliefs. The last term depends on the number of MC-backup iterations and decays exponentially with $m$.
Figure 1: (a) Underwater Navigation: A reduced map with an 11 × 12 grid is shown, with "S" marking the possible initial positions, "D" marking the destinations, "R" marking the rocks, and "O" marking the locations where the robot can localize completely. (b) Collaborative search and capture: Two robotic agents catching 12 escaped crocodiles in a 21 × 21 grid. (c) Vehicular ad-hoc networking: A UAV maintains an ad-hoc network over four ground vehicles in a 10 × 10 grid, with "B" marking the base and "D" the destinations.
4.2 Algorithm
Theorem 4 bounds the performance of the algorithm when given a set of beliefs. Macro-MCVI,
like MCVI, samples beliefs incrementally in practice and performs backup at the sampled beliefs.
Branch and bound is used to avoid sampling unimportant parts of the belief space. See [2] for details.
The other important component in a practical algorithm is the generation of the next belief; Macro-MCVI uses a particle filter for that. Given the macro-action construction as described in Section 3.2, a simple particle filter is easily implemented to approximate the next-belief function in equation (1): sample a set of states from the current belief; from each sampled state, simulate the current macro-action until termination, keeping track of its path length $t$; if the observation at termination matches the desired observation, keep the particle; the set of particles that are kept are weighted by $\gamma^t$ and then renormalized to form the next belief.² Similarly, MC-backup is performed by simply running simulations of the macro-actions; there is no need to store additional transition and observation matrices, allowing the method to run for very large state spaces.
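The particle filter just described fits in a few lines; in this sketch, `simulate(s, a)` is an assumed helper that runs macro-action $a$ from state $s$ to termination and returns the end state, the terminal observation, and the path length $t$:

```python
def next_belief_particles(particles, a, o, gamma, simulate):
    # particles: list of states sampled from the current belief
    # simulate(s, a): run macro-action a from s to termination, returning
    #                 (end_state, terminal_observation, path_length)
    states, weights = [], []
    for s in particles:
        s_end, obs, t = simulate(s, a)
        if obs == o:                      # keep only particles matching o
            states.append(s_end)
            weights.append(gamma ** t)    # weight by gamma^t
    total = sum(weights)
    if total == 0.0:
        return []                         # o was never observed in simulation
    return list(zip(states, (w / total for w in weights)))
```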
5 Experiments
We now illustrate the use of macro-actions for temporal abstraction in three POMDPs of varying
complexity. Their state spaces range from relatively small to very large. Correspondingly, the
macro-actions range from relatively simple ones to much more complex ones forming a hierarchy.
Underwater Navigation: The underwater navigation task was introduced in [9]. In this task, an autonomous underwater vehicle (AUV) navigates in an environment modeled as a 51 × 52 grid map.
The AUV needs to move from the left border to the right border while avoiding the rocks scattered
near its destination. The AUV has six actions: move north, move south, move east, move north-east,
move south-east or stay in the same location. Due to poor visibility, the AUV can only localize itself
along the top or bottom borders where there are beacon signals.
This problem has several interesting characteristics. First, the relatively small state space size of
2653 means that solvers that use $\alpha$-vectors, such as SARSOP [9], can be used. Second, the dynamics of the robot are noiseless, hence the main difficulty is localization from the robot's initially unknown location.
We use 5 macro-actions that move in a direction (north, south, east, north-east, or south-east) until
either a beacon signal or the destination is reached. We also define an additional macro-action that:
navigates to the nearest goal location if the AUV position is known, or simply stays in the same
location if the AUV position is not known. To enable proper behaviour of the last macro-action,
we augment the state space with a fully observable state variable that indicates the current AUV
location. The variable is initialized to a value denoting ?unknown? but takes the value of the current
AUV location after the beacon signal is received. This gives a simple example where the original
state space is augmented with a fully observable state variable to allow more sophisticated macroaction behaviour.
² More sophisticated approximations of the belief can be constructed but may require more knowledge of the underlying POMDP and more computation.
Collaborative Search and Capture: In this problem, a group of crocodiles had escaped from its
enclosure into the environment and two robotic agents have to collaborate to hunt down and capture
the crocodiles (see Figure 1). Both agents are centrally controlled and each agent can make a one
step move in one of the four directions (north, south, east and west) or stay still at each time instance.
There are twelve crocodiles in the environment. At every time instance, each crocodile moves to
a location furthest from the agent that is nearest to it with probability $1-p$ ($p = 0.05$ in the experiments). With probability $p$, the crocodile moves randomly. A crocodile is captured when
it is at the same location as an agent. The agents do not know the exact location of the crocodiles,
but each agent knows the number of crocodiles in the top left, top right, bottom left and bottom
right quadrants around itself from the noise made by the crocodiles. Each captured crocodile gives
a reward of 10, while movement is free.
We define twenty-five macro actions where each agent moves (north, south, east, west, or stay) along
a passageway until one of them reaches an intersection. In addition, each macro-action only returns the observation it makes at the point when the macro-action terminates, reducing the complexity of the problem, possibly at the cost of some sub-optimality. In this problem, the macro-actions are simple, but the state space is extremely large (approximately $179^{14}$).
Vehicular Ad-hoc Network: In a post disaster search and rescue scenario, a group of rescue vehicles are deployed for operation work in an area where communication infrastructure has been
destroyed. The rescue units need a high-bandwidth network to relay images of ground situations. An
Unmanned Aerial Vehicle (UAV) can be deployed to maintain WiFi network communication between the ground units. The UAV needs to visit each vehicle as often as possible to pick up and
deliver data packets [13].
In this task, 4 rescue vehicles and 1 UAV navigate in a terrain modeled as a 10 × 10 grid map. There are obstacles on the terrain that are impassable to ground vehicles but passable to the UAV. The UAV can
move in one of the four directions (north, south, east, and west) or stay in the same location at every
time step. The vehicles set off from the same base and move along some predefined path towards
their pre-assigned destinations where they will start their operations, randomly stopping along the
way. Upon reaching its destination, the vehicle may roam around the environment randomly while
carrying out its mission. The UAV knows its own location on the map and can observe the location
of a vehicle if they are in the same grid square. To elicit a policy with low network latency, at each time step there is a penalty of $-0.1 \times$ (number of time steps since the last visit) for each vehicle. There is a reward of 10 each time a vehicle is visited by the UAV. The state space
consists of the vehicles? locations, UAV location in the grid map and the number of time steps since
each vehicle is last seen (for computing the reward).
We abstract the movements of UAV to search and visit a single vehicle as macro actions. There
are two kinds of search macro actions for each vehicle: search for a vehicle along its predefined
path and search for a vehicle that has started to roam randomly. To enable the macro-actions to
work effectively, the state space is also augmented with the previous seen location of each vehicle.
Each macro-action is in turn hierarchically constructed by solving the simplified POMDP task of
searching for a single vehicle on the same map using basic actions and some simple macro-actions
that move along the paths. This problem has both complex hierarchically constructed macro-actions
and very large state space.
5.1 Experimental setup
We applied Macro-MCVI to the above tasks and compared its performance with the original MCVI
algorithm. We also compared with a state-of-the-art off-line POMDP solver, SARSOP [9], on the
underwater navigation task. SARSOP could not run on the other two tasks, due to their large state
space sizes. For each task, we ran Macro-MCVI until the average total reward stabilized. We then ran
the competing algorithms for at least the same amount of time. The exact running times are difficult
to control because of our implementation limitations. To confirm the comparison results, we also
ran the competing algorithms 100 times longer when possible. All experiments were conducted on
a 16 core Intel Xeon 2.4Ghz computer server.
Neither MCVI nor SARSOP uses macro-actions. We are not aware of other efficient off-line macroaction POMDP solvers that have been demonstrated on very large state space problems. Some online
search algorithms, such as PUMA [7], use macro-actions and have shown strong results. Online
search algorithms do not generate a policy, making a fair comparison difficult. Despite that, they
are useful as baseline references; we implement a variant of PUMA as one such reference. In our
experiments, we simply gave the online search algorithms as much or more time than Macro-MCVI
and report the results here. PUMA uses open-loop macro-actions. As a baseline reference for online
solvers with closed-loop macro-actions, we also created an online search variant of Macro-MCVI
by removing the MC-backup component. We refer to this variant as Online-Macro. It is similar to
other recent online POMDP algorithms [12], but uses the same closed-loop macro-actions as MCVI
does.
5.2 Results
The performance of the different algorithms is shown
in Figure 2 with 95% confidence intervals.
The underwater navigation task consists of two phases: the localization phase and the navigate-to-goal phase.
Macro-MCVI's policy takes one macro-action, "moving northeast until reaching the border", to localize, and another macro-action, "navigating to the goal", to reach the goal. In contrast, both MCVI and SARSOP fail to match the performance of Macro-MCVI even when they are run 100 times longer. Online-Macro does well, as the planning horizon is short with the use of macro-actions. PUMA, however, does not do as well, as it uses the less powerful open-loop macro-actions, which move in the same direction for a fixed number of time steps.
Figure 2: Performance comparison.

                                  Reward             Time(s)
Underwater Navigation
  Macro-MCVI                      749.30 ± 0.28      1
  MCVI                            678.05 ± 0.48      4
                                  725.28 ± 0.38      100
  SARSOP                          710.71 ± 4.52      1
                                  730.83 ± 0.75      100
  PUMA                            697.47 ± 4.58      1
  Online-Macro                    746.10 ± 2.37      1
Collaborative Search & Capture
  Macro-MCVI                      17.04 ± 0.03       120
  MCVI                            13.14 ± 0.04       120
                                  16.38 ± 0.05       12000
  PUMA                            1.04 ± 0.91        144
  Online-Macro                    0                  3657
Vehicular Ad-Hoc Network
  Macro-MCVI                      -323.55 ± 3.79     29255
  MCVI                            -1232.57 ± 2.24    29300
  Greedy                          -422.26 ± 3.98     28800
For the collaborative search & capture task, MCVI
fails to match the performance of Macro-MCVI even
when it is run for 100 times longer. PUMA and
Online-Macro do badly as they fail to search deep
enough and do not have the benefit of reusing sub-policies obtained from the backup operation.
To confirm that it is the backup operation and not the shorter per macro-action time that is responsible for the performance difference, we ran Online-Macro for a much longer time and found the
result unchanged.
The vehicular ad-hoc network task was solved hierarchically in two stages. We first used Macro-MCVI to solve for the policy that finds a single vehicle. This stage took roughly 8 hours of computation time. We then used the single-vehicle policy as a macro-action and solved for the higher-level policy that plans over the macro-actions. Although it took substantial computation time, Macro-MCVI generated a reasonable policy in the end. In contrast, MCVI, without macro-actions, fails
badly for this task. Due to the long running time involved, we did not run MCVI 100 times longer.
To confirm that the policy computed by Macro-MCVI at the higher level of the hierarchy is also
effective, we manually crafted a greedy policy over the single-vehicle macro-actions. This greedy
policy always searches for the vehicle that has not been visited for the longest duration. The experimental results indicate that the higher-level policy computed by Macro-MCVI is more effective than
the greedy policy. We did not apply online algorithms to this task, as we are not aware of any simple
way to hierarchically construct macro-actions online.
6 Conclusions
We have successfully extended MCVI, an algorithm for solving very large state space POMDPs,
to include macro-actions. This allows MCVI to use temporal abstraction to help solve difficult
POMDP problems. The method inherits the good theoretical properties of MCVI and is easy to
apply in practice. Experiments show that it can substantially improve the performance of MCVI
when used with appropriately chosen macro-actions.
Acknowledgements We thank Tomás Lozano-Pérez and Leslie Kaelbling from MIT for many insightful discussions. This work is supported in part by MoE grant MOE2010-T2-2-071 and MDA GAMBIT grant R-252-000-398-490.
References
[1] H. Bai, D. Hsu, M.J. Kochenderfer, and W.S. Lee. Unmanned aircraft collision avoidance using continuous-state POMDPs. In Proc. Robotics: Science & Systems, 2011.
[2] H. Bai, D. Hsu, W.S. Lee, and V. Ngo. Monte Carlo value iteration for continuous-state POMDPs. In Algorithmic Foundations of Robotics IX: Proc. Int. Workshop on the Algorithmic Foundations of Robotics (WAFR), pages 175-191. Springer, 2011.
[3] A.G. Barto and S. Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13, 2003.
[4] T.G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artificial Intelligence Research, 13:227-303, 2000.
[5] E. Hansen and R. Zhou. Synthesis of hierarchical finite-state controllers for POMDPs. In Proc. Int. Conf. on Automated Planning and Scheduling, 2003.
[6] M. Hauskrecht, N. Meuleau, L.P. Kaelbling, T. Dean, and C. Boutilier. Hierarchical solution of Markov decision processes using macro-actions. In Proc. Conf. on Uncertainty in Artificial Intelligence, pages 220-229, 1998.
[7] R. He, E. Brunskill, and N. Roy. PUMA: Planning under uncertainty with macro-actions. In Proc. AAAI Conf. on Artificial Intelligence, 2010.
[8] H. Kurniawati, Y. Du, D. Hsu, and W.S. Lee. Motion planning under uncertainty for robotic tasks with long time horizons. Int. J. Robotics Research, 30(3):308-323, 2010.
[9] H. Kurniawati, D. Hsu, and W.S. Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Proc. Robotics: Science & Systems, 2008.
[10] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In Int. Jnt. Conf. on Artificial Intelligence, volume 18, pages 1025-1032, 2003.
[11] J. Pineau, N. Roy, and S. Thrun. A hierarchical approach to POMDP planning and execution. In Workshop on Hierarchy & Memory in Reinforcement Learning (ICML), volume 156, 2001.
[12] S. Ross, J. Pineau, S. Paquet, and B. Chaib-Draa. Online planning algorithms for POMDPs. Journal of Artificial Intelligence Research, 32(1):663-704, 2008.
[13] A. Sivakumar and C.K.Y. Tan. UAV swarm coordination using cooperative control for establishing a wireless communications backbone. In Proc. Int. Conf. on Autonomous Agents & Multiagent Systems, pages 1157-1164, 2010.
[14] T. Smith and R. Simmons. Heuristic search value iteration for POMDPs. In Proc. Conf. on Uncertainty in Artificial Intelligence, pages 520-527. AUAI Press, 2004.
[15] R.S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, 1999.
[16] G. Theocharous and L.P. Kaelbling. Approximate planning in POMDPs with macro-actions. Advances in Neural Information Processing Systems, 17, 2003.
[17] M. Toussaint, L. Charlin, and P. Poupart. Hierarchical POMDP controller optimization by likelihood maximization. Proc. Conf. on Uncertainty in Artificial Intelligence, 2008.
[18] C.C. White. Procedures for the solution of a finite-horizon, partially observed, semi-Markov optimization problem. Operations Research, 24(2):348-358, 1976.
3,841 | 4,478 | Multi-Bandit Best Arm Identification
Victor Gabillon
Mohammad Ghavamzadeh
Alessandro Lazaric
INRIA Lille - Nord Europe, Team SequeL
{victor.gabillon,mohammad.ghavamzadeh,alessandro.lazaric}@inria.fr
S?ebastien Bubeck
Department of Operations Research and Financial Engineering, Princeton University
[email protected]
Abstract
We study the problem of identifying the best arm in each of the bandits in a multibandit multi-armed setting. We first propose an algorithm called Gap-based Exploration (GapE) that focuses on the arms whose mean is close to the mean of
the best arm in the same bandit (i.e., small gap). We then introduce an algorithm,
called GapE-V, which takes into account the variance of the arms in addition to
their gap. We prove an upper-bound on the probability of error for both algorithms. Since GapE and GapE-V need to tune an exploration parameter that depends on the complexity of the problem, which is often unknown in advance, we
also introduce variations of these algorithms that estimate this complexity online.
Finally, we evaluate the performance of these algorithms and compare them to
other allocation strategies on a number of synthetic problems.
1 Introduction
Consider a clinical problem with M subpopulations, in which one should decide between $K_m$ options for treating subjects from each subpopulation m. A subpopulation may correspond to patients
with a particular gene biomarker (or other risk categories) and the treatment options are the available
treatments for a disease. The main objective here is to construct a rule, which recommends the best
treatment for each of the subpopulations. These rules are usually constructed using data from clinical trials that are generally costly to run. Therefore, it is important to distribute the trial resources
wisely so that the devised rule yields a good performance. Since it may take significantly more
resources to find the best treatment for one subpopulation than for the others, the common strategy
of enrolling patients as they arrive may not yield an overall good performance. Moreover, applying
treatment options uniformly at random in a subpopulation could not only waste trial resources, but
also it might run the risk of finding a bad treatment for that subpopulation. This problem can be formulated as the best arm identification over M multi-armed bandits [1], which itself can be seen as
the problem of pure exploration [4] over multiple bandits. In this formulation, each subpopulation is
considered as a multi-armed bandit, each treatment as an arm, trying a medication on a patient as a
pull, and we are asked to recommend an arm for each bandit after a given number of pulls (budget).
The evaluation can be based on 1) the average over the bandits of the reward of the recommended
arms, or 2) the average probability of error (not selecting the best arm), or 3) the maximum probability of error. Note that this setting is different from the standard multi-armed bandit problem in
which the goal is to maximize the cumulative sum of rewards (see e.g., [13, 3]).
The pure exploration problem is about designing strategies that make the best use of the limited budget (e.g., the total number of patients that can be admitted to the clinical trial) in order to optimize the
performance in a decision-making task. Audibert et al. [1] proposed two algorithms to address this
problem: 1) a highly exploring strategy based on upper confidence bounds, called UCB-E, in which
the optimal value of its parameter depends on some measure of the complexity of the problem, and
2) a parameter-free method based on progressively rejecting the arms which seem to be suboptimal,
called Successive Rejects. They showed that both algorithms are nearly optimal since their probability of returning the wrong arm decreases exponentially with the budget. Racing algorithms (e.g., [10, 12]) and action-elimination algorithms [7] address this problem under a constraint on the accuracy in
identifying the best arm and they minimize the budget needed to achieve that accuracy. However,
UCB-E and Successive Rejects are designed for a single bandit problem, and as we will discuss later,
cannot be easily extended to the multi-bandit case studied in this paper. Deng et al. have recently
proposed an active learning algorithm for resource allocation over multiple bandits [5]. However,
they do not provide any theoretical analysis for their algorithm and only empirically evaluate its performance. Moreover, the target of their proposed algorithm is to minimize the maximum uncertainty
in estimating the value of the arms for each bandit. Note that this is different than our target, which
is to maximize the quality of the arms recommended for each bandit.
In this paper, we study the problem of best-arm identification in a multi-armed multi-bandit setting
under a fixed budget constraint, and propose an algorithm, called Gap-based Exploration (GapE), to
solve it. The allocation strategy implemented by GapE focuses on the gap of the arms, i.e., the difference between the mean of the arm and the mean of the best arm (in that bandit). The GapE-variance
(GapE-V) algorithm extends this approach taking into account also the variance of the arms. For
both algorithms, we prove an upper-bound on the probability of error that decreases exponentially
with the budget. Since both GapE and GapE-V need to tune an exploration parameter that depends
on the complexity of the problem, which is rarely known in advance, we also introduce their adaptive
version. Finally, we evaluate the performance of these algorithms and compare them with Uniform
and Uniform+UCB-E strategies on a number of synthetic problems. Our empirical results indicate
that 1) GapE and GapE-V have a better performance than Uniform and Uniform+UCB-E, and 2) the
adaptive version of these algorithms match the performance of their non-adaptive counterparts.
2 Problem Setup
In this section, we introduce the notation used throughout the paper and formalize the multi-bandit
best arm identification problem. Let M be the number of bandits and K be the number of arms for
each bandit (we use indices m, p, q for the bandits and k, i, j for the arms). Each arm k of a bandit m is characterized by a distribution $\nu_{mk}$ bounded in $[0, b]$ with mean $\mu_{mk}$ and variance $\sigma^2_{mk}$. In the following, we assume that each bandit has a unique best arm. We denote by $\mu^*_m$ and $k^*_m$ the mean and the index of the best arm of bandit m (i.e., $\mu^*_m = \max_{1 \leq k \leq K} \mu_{mk}$, $k^*_m = \arg\max_{1 \leq k \leq K} \mu_{mk}$). In each bandit m, we define the gap of each arm as $\Delta_{mk} = |\max_{j \neq k} \mu_{mj} - \mu_{mk}|$.
The clinical trial problem described in Sec. 1 can be formalized as a game between a stochastic multi-bandit environment and a forecaster, where the distributions $\{\nu_{mk}\}$ are unknown to the forecaster. At each round $t = 1, \ldots, n$, the forecaster pulls a bandit-arm pair $I(t) = (m, k)$ and observes a sample drawn from the distribution $\nu_{I(t)}$ independent from the past. The forecaster estimates the expected value of each arm by computing the average of the samples observed over time. Let $T_{mk}(t)$ be the number of times that arm k of bandit m has been pulled by the end of round t; then the mean of this arm is estimated as $\hat\mu_{mk}(t) = \frac{1}{T_{mk}(t)} \sum_{s=1}^{T_{mk}(t)} X_{mk}(s)$, where $X_{mk}(s)$ is the s-th sample observed from $\nu_{mk}$. Given the previous definitions, we define the estimated gaps as $\hat\Delta_{mk}(t) = |\max_{j \neq k} \hat\mu_{mj}(t) - \hat\mu_{mk}(t)|$. At the end of round n, the forecaster returns for each bandit m the arm with the highest estimated mean, i.e., $J_m(n) = \arg\max_k \hat\mu_{mk}(n)$, and incurs a regret
$$r(n) = \frac{1}{M} \sum_{m=1}^{M} r_m(n) = \frac{1}{M} \sum_{m=1}^{M} \big( \mu^*_m - \mu_{m J_m(n)} \big).$$
As discussed in the introduction, other performance measures can be defined for this problem. In some applications, returning the wrong arm is considered as an error independently from its regret, and thus, the objective is to minimize the average probability of error
$$e(n) = \frac{1}{M} \sum_{m=1}^{M} e_m(n) = \frac{1}{M} \sum_{m=1}^{M} \mathbb{P}\big( J_m(n) \neq k^*_m \big).$$
Finally, in problems similar to the clinical trial, a reasonable objective is to return the right treatment for all the genetic profiles and not just to have a small average probability of error. In this case, the global performance of the forecaster can be measured as
$$\ell(n) = \max_m \ell_m(n) = \max_m \mathbb{P}\big( J_m(n) \neq k^*_m \big).$$
It is interesting to note the relationship between these three performance measures: $\min_m \Delta_m \, e(n) \leq \mathbb{E}[r(n)] \leq b\, e(n) \leq b\, \ell(n)$, where the expectation in the regret is w.r.t. the random samples. As a result, any algorithm minimizing the worst-case probability of error, $\ell(n)$, also controls the average probability of error, $e(n)$, and the simple regret $\mathbb{E}[r(n)]$. Note that the algorithms introduced in this paper directly target the problem of minimizing $\ell(n)$.
Parameters: number of rounds n, exploration parameter a, maximum range b
Initialize: $T_{mk}(0) = 0$, $\hat\Delta_{mk}(0) = 0$ for all bandit-arm pairs (m, k)
for t = 1, 2, ..., n do
    Compute $B_{mk}(t) = -\hat\Delta_{mk}(t-1) + b \sqrt{a / T_{mk}(t-1)}$ for all bandit-arm pairs (m, k)
    Draw $I(t) \in \arg\max_{m,k} B_{mk}(t)$
    Observe $X_{I(t)}\big(T_{I(t)}(t-1) + 1\big) \sim \nu_{I(t)}$
    Update $T_{I(t)}(t) = T_{I(t)}(t-1) + 1$ and $\hat\Delta_{mk}(t)$ for all k of the selected bandit
end for
Return $J_m(n) \in \arg\max_{k \in \{1, \ldots, K\}} \hat\mu_{mk}(n)$, for all $m \in \{1, \ldots, M\}$

Figure 1: The pseudo-code of the Gap-based Exploration (GapE) algorithm.
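The following is a minimal Python sketch of the pseudo-code above. The sampler interface `pull(m, k)` (returning a reward in [0, b]) and the one-pull-per-pair initialization are assumptions of this sketch, not prescriptions from the paper:

```python
import numpy as np

def gape(pull, M, K, n, a, b=1.0):
    """Minimal GapE sketch: returns the recommended arm J_m(n) for each bandit m.
    pull(m, k) is assumed to return a sample in [0, b] from arm k of bandit m."""
    T = np.zeros((M, K))            # number of pulls per bandit-arm pair
    S = np.zeros((M, K))            # running sum of observed samples
    # pull each pair once so that every empirical mean is defined
    for m in range(M):
        for k in range(K):
            S[m, k] += pull(m, k)
            T[m, k] += 1
    for t in range(M * K, n):
        mu = S / T
        # estimated gap of arm k: distance to the best *other* arm in the same bandit
        gap = np.empty((M, K))
        for m in range(M):
            for k in range(K):
                others = np.delete(mu[m], k)
                gap[m, k] = abs(others.max() - mu[m, k])
        B = -gap + b * np.sqrt(a / T)           # the GapE B-index
        m, k = np.unravel_index(np.argmax(B), B.shape)
        S[m, k] += pull(m, k)
        T[m, k] += 1
    return (S / T).argmax(axis=1)               # J_m(n) for each bandit
```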
3 The Gap-based Exploration Algorithm
Fig. 1 contains the pseudo-code of the gap-based exploration (GapE) algorithm. GapE flattens the
bandit-arm structure and reduces it to a single-bandit problem with M K arms. At each time step t,
the algorithm relies on the observations up to time t-1 to build an index $B_{mk}(t)$ for each bandit-arm pair, and then selects the pair I(t) with the highest index. The index $B_{mk}$ consists of two
terms. The first term is the negative of the estimated gap for arm k in bandit m. Similar to other
upper-confidence bound (UCB) methods [3], the second part is an exploration term which forces the
algorithm to pull arms that have been less explored. As a result, the algorithm tends to pull arms
with small estimated gap and small number of pulls. The exploration parameter a tunes the level
of exploration of the algorithm. As shown by the theoretical analysis of Sec. 3.1, if the time horizon n is known, a should be set to $a = \frac{4}{9} \frac{n-K}{H}$, where $H = \sum_{m,k} b^2 / \Delta^2_{mk}$ is the complexity of the problem (see Sec. 3.1 for further discussion). Note that GapE differs from most standard bandit
strategies in the sense that the B-index for an arm depends explicitly on the statistics of the other
arms. This feature makes the analysis of this algorithm much more involved.
As we may notice from Fig. 1, GapE resembles the UCB-E algorithm [1] designed to solve the pure
exploration problem in the single-bandit setting. Nonetheless, the use of the negative estimated gap ($-\hat\Delta_{mk}$) instead of the estimated mean ($\hat\mu_{mk}$) (used by UCB-E) is crucial in the multi-bandit setting. In the single-bandit problem, since the best and second best arms have the same gap ($\Delta_{m k^*_m} = \min_{k \neq k^*_m} \Delta_{mk}$), GapE considers them equivalent and tends to pull them the same amount of time,
while UCB-E tends to pull the best arm more often than the second best one. Despite this difference,
the performance of both algorithms in predicting the best arm after n pulls would be the same. This is
due to the fact that the probability of error depends on the capability of the algorithm to distinguish
optimal and suboptimal arms, and this is not affected by a different allocation over the best and
second best arms as long as the number of pulls allocated to that pair is large enough w.r.t. their gap.
Despite this similarity, the two approaches become completely different in the multi-bandit case. In
this case, if we run UCB-E on all the MK arms, it tends to pull more often the arm with the highest mean over all the bandits, i.e., $k^* = \arg\max_{m,k} \mu_{mk}$. As a result, it would be accurate in predicting the best arm $k^*$ over bandits, but may have an arbitrarily bad performance in predicting the best arm for
each bandit, and thus, may incur a large error $\ell(n)$. On the other hand, GapE focuses on the arms
with the smallest gaps. This way, it assigns more pulls to bandits whose optimal arms are difficult
to identify (i.e., bandits with arms with small gaps), and as shown in the next section, it achieves a
high probability in identifying the best arm in each bandit.
3.1 Theoretical Analysis
In this section, we derive an upper-bound on the probability of error $\ell(n)$ for the GapE algorithm.

Theorem 1. If we run GapE with parameter $0 < a \leq \frac{4}{9} \frac{n - MK}{H}$, then its probability of error satisfies
$$\ell(n) \leq \mathbb{P}\big( \exists m : J_m(n) \neq k^*_m \big) \leq 2MKn \exp\Big( -\frac{a}{64} \Big),$$
in particular for $a = \frac{4}{9} \frac{n - MK}{H}$ we have $\ell(n) \leq 2MKn \exp\big( -\frac{1}{144} \frac{n - MK}{H} \big)$.
Remark 1 (Analysis of the bound). If the time horizon n is known in advance, it would be possible
to set the exploration parameter a as a linear function of n, and as a result, the probability of error of
GapE decreases exponentially with the time horizon. The other interesting aspect of the bound is the complexity term H appearing in the optimal value of the exploration parameter a (i.e., $a = \frac{4}{9} \frac{n-K}{H}$). If we denote by $H_{mk} = b^2 / \Delta^2_{mk}$ the complexity of arm k in bandit m, it is clear from the definition of H that each arm has an additive impact on the overall complexity of the multi-bandit problem. Moreover, if we define the complexity of each bandit m as $H_m = \sum_k b^2 / \Delta^2_{mk}$ (similar to the definition of complexity for UCB-E in [1]), the GapE complexity may be rewritten as $H = \sum_m H_m$. This means that the complexity of GapE is simply the sum of the complexities of all the bandits.
Remark 2 (Comparison with the static allocation strategy). The main objective of GapE is to
tradeoff between allocating pulls according to the gaps (more precisely, according to the complexities Hmk ) and the exploration needed to improve the accuracy of their estimates. If the gaps were
known in advance, a nearly-optimal static allocation strategy assigns to each bandit-arm pair a number of pulls proportional to its complexity. Let us consider a strategy that pulls each arm a fixed
number of times over the horizon n. The probability of error for this strategy may be bounded as
$$\ell_{\mathrm{Static}}(n) \leq \mathbb{P}\big( \exists m : J_m(n) \neq k^*_m \big) \leq \sum_{m=1}^{M} \sum_{k \neq k^*_m} \mathbb{P}\big( \hat\mu_{m k^*_m}(n) \leq \hat\mu_{mk}(n) \big) \leq \sum_{m=1}^{M} \sum_{k \neq k^*_m} \exp\Big( -T_{mk}(n) \frac{\Delta^2_{mk}}{b^2} \Big) = \sum_{m=1}^{M} \sum_{k \neq k^*_m} \exp\big( -T_{mk}(n) H^{-1}_{mk} \big). \quad (1)$$
Given the constraint $\sum_{m,k} T_{mk}(n) = n$, the allocation minimizing the last term in Eq. 1 is $T^*_{mk}(n) = n H_{mk} / H$. We refer to this fixed strategy as StaticGap. Although this is not necessarily the optimal static strategy ($T^*_{mk}(n)$ minimizes an upper-bound), this allocation guarantees a probability of error smaller than $MK \exp(-n/H)$. Theorem 1 shows that, for n large enough, GapE achieves the same performance as the static allocation StaticGap.
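As an illustration, a minimal sketch computing the StaticGap allocation from known gaps (the gap values below are hypothetical):

```python
import numpy as np

# gaps Delta_mk for M = 2 bandits with K = 3 arms each (hypothetical values)
gaps = np.array([[0.1, 0.2, 0.4],
                 [0.05, 0.3, 0.6]])
b, n = 1.0, 1000
H_mk = b**2 / gaps**2            # per-arm complexities
H = H_mk.sum()                   # overall complexity
T_static = n * H_mk / H          # StaticGap: pulls proportional to complexity
print(np.round(T_static))        # the hardest (small-gap) arms receive the most pulls
```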
Remark 3 (Comparison with other allocation strategies). At the beginning of Sec. 3, we discussed the difference between GapE and UCB-E. Here we compare the bound reported in Theorem 1 with the performance of the Uniform and combined Uniform+UCB-E allocation strategies. In
the uniform allocation strategy, the total budget n is uniformly split over all the bandits and arms.
As a result, each bandit-arm pair is pulled Tmk (n) = n/(M K) times. Using the same derivation as
in Remark 2, the probability of error $\ell(n)$ for this strategy may be bounded as
$$\ell_{\mathrm{Unif}}(n) \leq \sum_{m=1}^{M} \sum_{k \neq k^*_m} \exp\Big( -\frac{n}{MK} \frac{\Delta^2_{mk}}{b^2} \Big) \leq MK \exp\Big( -\frac{n}{MK \max_{m,k} H_{mk}} \Big).$$
In the Uniform+UCB-E allocation strategy, i.e., a two-level algorithm that first selects a bandit
uniformly and then pulls arms within each bandit using UCB-E, the total number of pulls for each bandit m is $\sum_k T_{mk}(n) = n/M$, while the number of pulls $T_{mk}(n)$ over the arms in bandit m is
determined by UCB-E. Thus, the probability of error of this strategy may be bounded as
$$\ell_{\mathrm{Unif+UCB\text{-}E}}(n) \leq \sum_{m=1}^{M} 2nK \exp\Big( -\frac{n/M - K}{18 H_m} \Big) \leq 2nMK \exp\Big( -\frac{n/M - K}{18 \max_m H_m} \Big),$$
where the first inequality follows from Theorem 1 in [1] (recall that $H_m = \sum_k b^2 / \Delta^2_{mk}$). Let b = 1
(i.e., all the arms have distributions bounded in [0, 1]), up to constants and multiplicative factors in
front of the exponentials, and if n is large enough compared to M and K (so as to approximate
n/M ? K and n ? K by n), the probability of error for the three algorithms may be bounded as
$$\ell_{\mathrm{Unif}}(n) \leq \exp\Big( O\Big( \frac{-n}{MK \max_{m,k} H_{mk}} \Big) \Big), \quad \ell_{\mathrm{GapE}}(n) \leq \exp\Big( O\Big( \frac{-n}{\sum_{m,k} H_{mk}} \Big) \Big), \quad \ell_{\mathrm{U+UCBE}}(n) \leq \exp\Big( O\Big( \frac{-n/M}{\max_m H_m} \Big) \Big).$$
By comparing the arguments of the exponential terms, we have the trivial sequence of inequalities $MK \max_{m,k} H_{mk} \geq M \max_m \sum_k H_{mk} \geq \sum_{m,k} H_{mk}$, which implies that the upper bound on the
probability of error of GapE is usually significantly smaller. This relationship, which is confirmed
by the experiments reported in Sec. 4, shows that GapE is able to adapt to the complexity H of
the overall multi-bandit problem better than the other two allocation strategies. In fact, while the
performance of the Uniform strategy depends on the most complex arm over the bandits and the
strategy Unif+UCB-E is affected by the most complex bandit, the performance of GapE depends on
the sum of the complexities of all the arms involved in the pure exploration problem.
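This ordering of the three complexity terms can be checked numerically; a small sketch with hypothetical per-arm complexities:

```python
import numpy as np

rng = np.random.default_rng(0)
H_mk = 1.0 / rng.uniform(0.05, 0.5, size=(4, 3))**2   # hypothetical complexities
M, K = H_mk.shape
terms = (M * K * H_mk.max(),                 # Uniform: MK * max_{m,k} H_mk
         M * H_mk.sum(axis=1).max(),         # Unif+UCB-E: M * max_m H_m
         H_mk.sum())                         # GapE: sum_{m,k} H_mk
print(terms)   # non-increasing sequence, as in the inequality above
```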
Proof of Theorem 1. Step 1. Let us consider the following event:
$$\mathcal{E} = \Big\{ \forall m \in \{1, \ldots, M\},\ \forall k \in \{1, \ldots, K\},\ \forall t \in \{1, \ldots, n\},\ \big| \hat\mu_{mk}(t) - \mu_{mk} \big| < bc \sqrt{\tfrac{a}{T_{mk}(t)}} \Big\}.$$
From Chernoff-Hoeffding's inequality and a union bound, we have $\mathbb{P}(\mathcal{E}) \geq 1 - 2MKn \exp(-2ac^2)$. Now we would like to prove that on the event $\mathcal{E}$, we find the best arm for all the bandits, i.e., $J_m(n) = k^*_m$, $\forall m \in \{1, \ldots, M\}$. Since $J_m(n)$ is the empirical best arm of bandit m, we should prove that for any $k \in \{1, \ldots, K\}$, $\hat\mu_{mk}(n) \leq \hat\mu_{m k^*_m}(n)$. By upper-bounding the LHS and lower-bounding the RHS of this inequality, we note that it would be enough to prove $bc \sqrt{a / T_{mk}(n)} \leq \Delta_{mk}/2$ on the event $\mathcal{E}$, or equivalently, to prove that for any bandit-arm pair (m, k), we have $T_{mk}(n) \geq \frac{4ab^2c^2}{\Delta^2_{mk}}$.
Step 2. In this step, we show that in GapE, for any bandits (m, q) and arms (k, j), and for any $t \geq MK$, the following dependence between the numbers of pulls of the arms holds:
$$-\Delta_{mk} + (1+d)\, b \sqrt{\frac{a}{\max\{T_{mk}(t) - 1,\, 1\}}} \;\geq\; -\Delta_{qj} + (1-d)\, b \sqrt{\frac{a}{T_{qj}(t)}}, \quad (2)$$
where $d \in [0, 1]$. We prove this inequality by induction.
Base step. We know that after the first MK rounds of the GapE algorithm, all the arms have been pulled once, i.e., $T_{mk}(t) = 1$, $\forall m, k$; thus if $a \geq 1/4d^2$, the inequality (2) holds for $t = MK$.
Inductive step. Let us assume that (2) holds at time t-1 and we pull arm i of bandit p at time t, i.e., I(t) = (p, i). So at time t, the inequality (2) trivially holds for every choice of m, q, k, and j, except when (m, k) = (p, i). As a result, in the inductive step, we only need to prove that the following holds for any $q \in \{1, \ldots, M\}$ and $j \in \{1, \ldots, K\}$:
$$-\Delta_{pi} + (1+d)\, b \sqrt{\frac{a}{\max\{T_{pi}(t) - 1,\, 1\}}} \;\geq\; -\Delta_{qj} + (1-d)\, b \sqrt{\frac{a}{T_{qj}(t)}}. \quad (3)$$
Since arm i of bandit p has been pulled at time t, we have that for any bandit-arm pair (q, j)
$$-\hat\Delta_{pi}(t-1) + b \sqrt{\frac{a}{T_{pi}(t-1)}} \;\geq\; -\hat\Delta_{qj}(t-1) + b \sqrt{\frac{a}{T_{qj}(t-1)}}. \quad (4)$$
To prove (3), we first prove an upper-bound for $-\hat\Delta_{pi}(t-1)$ and a lower-bound for $-\hat\Delta_{qj}(t-1)$:
$$-\hat\Delta_{pi}(t-1) \leq -\Delta_{pi} + \frac{2bc}{1-c} \sqrt{\frac{a}{T_{pi}(t) - 1}} \quad \text{and} \quad -\hat\Delta_{qj}(t-1) \geq -\Delta_{qj} - \frac{2\sqrt{2}\, bc}{1-d} \sqrt{\frac{a}{T_{qj}(t)}}. \quad (5)$$
We report the proofs of the inequalities in (5) in App. B of [8]. The inequality (3), and as a result the inductive step, is proved by replacing $-\hat\Delta_{pi}(t-1)$ and $-\hat\Delta_{qj}(t-1)$ in (4) by the bounds in (5), under the conditions that $d \geq \frac{2c}{1-c}$ and $d \geq \frac{2\sqrt{2}\, c}{1-d}$. These conditions are satisfied by $d = 1/2$ and $c = \sqrt{2}/16$.
Step 3. In order to prove the condition on $T_{mk}(n)$ in step 1, we need to find a lower-bound on the number of pulls of all the arms at time t = n (at the end). Let us assume that arm k of bandit m has been pulled fewer than $\frac{ab^2 (1-d)^2}{\Delta^2_{mk}}$ times, which indicates that $-\Delta_{mk} + (1-d)\, b \sqrt{a / T_{mk}(n)} > 0$. From this result and (2), we have $-\Delta_{qj} + (1+d)\, b \sqrt{a / (T_{qj}(n) - 1)} > 0$, or equivalently $T_{qj}(n) < \frac{ab^2 (1+d)^2}{\Delta^2_{qj}} + 1$ for any pair (q, j). We also know that $\sum_{q,j} T_{qj}(n) = n$. From these, we deduce that $n - MK < ab^2 (1+d)^2 \sum_{q,j} \frac{1}{\Delta^2_{qj}}$. So, if we select a such that $n - MK \geq ab^2 (1+d)^2 \sum_{q,j} \frac{1}{\Delta^2_{qj}}$, we contradict the first assumption that $T_{mk}(n) < \frac{ab^2 (1-d)^2}{\Delta^2_{mk}}$, which means that $T_{mk}(n) \geq \frac{4ab^2c^2}{\Delta^2_{mk}}$ for any pair (m, k) when $1 - d \geq 2c$. This concludes the proof. The condition on a in the statement of the theorem comes from our choice of a in this step and the values of c and d from the inductive step.
3.2 Extensions
In this section we propose two variants on the GapE algorithm with the objective of extending its
applicability and improving its performance.
GapE with variance (GapE-V). The allocation strategy implemented by GapE focuses only on the arms with small gap and does not take into consideration their variance. However, it is clear that the arms with small variance, even if their gap is small, just need a few pulls to be correctly estimated. In order to take into account both the gaps and the variances of the arms, we introduce the GapE-variance (GapE-V) algorithm. Let $\hat\sigma^2_{mk}(t) = \frac{1}{T_{mk}(t) - 1} \sum_{s=1}^{T_{mk}(t)} \big( X_{mk}(s) - \hat\mu_{mk}(t) \big)^2$ be the estimated variance for arm k of bandit m at the end of round t. GapE-V uses the following B-index for each arm:
$$B_{mk}(t) = -\hat\Delta_{mk}(t-1) + \sqrt{\frac{2a\, \hat\sigma^2_{mk}(t-1)}{T_{mk}(t-1)}} + \frac{7ab}{3 \big( T_{mk}(t-1) - 1 \big)}.$$
Note that the exploration term in the B-index has now two components: the first one depends on the
empirical variance and the second one decreases as O(1/Tmk ). As a result, arms with low variance
will be explored much less than in the GapE algorithm. Similar to the difference between UCB [3]
and UCB-V [2], while the B-index in GapE is motivated by Hoeffding?s inequalities, the one for
GapE-V is obtained using an empirical Bernstein?s inequality [11, 2]. The following performance
bound can be proved for GapE-V algorithm. We report the proof of Theorem 2 in App. C of [8].
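A minimal sketch of the GapE-V index computation for one bandit-arm pair, assuming the arm's samples are stored and that every arm has been pulled at least twice at initialization:

```python
import numpy as np

def gape_v_index(samples, gap_hat, a, b=1.0):
    """B-index of GapE-V for one bandit-arm pair at round t.
    samples: 1-D array of the T_mk(t-1) observations of the arm,
    gap_hat: estimated gap Delta-hat_mk(t-1)."""
    T = len(samples)
    assert T >= 2, "each arm is assumed to be pulled at least twice"
    var_hat = samples.var(ddof=1)                 # empirical variance
    return (-gap_hat
            + np.sqrt(2.0 * a * var_hat / T)      # variance-dependent term
            + 7.0 * a * b / (3.0 * (T - 1)))      # Bernstein-style O(1/T) term
```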
Theorem 2. If GapE-V is run with parameter $0 < a \leq \frac{8}{9} \frac{n - 2MK}{H^\sigma}$, then it satisfies
$$\ell(n) \leq \mathbb{P}\big( \exists m : J_m(n) \neq k^*_m \big) \leq 6nMK \exp\Big( -\frac{9a}{64 \times 64} \Big),$$
in particular for $a = \frac{8}{9} \frac{n - 2MK}{H^\sigma}$, we have $\ell(n) \leq 6nMK \exp\big( -\frac{1}{64 \times 8} \frac{n - 2MK}{H^\sigma} \big)$.
In Theorem 2, $H^\sigma$ is the complexity of the GapE-V algorithm and is defined as
$$H^\sigma = \sum_{m=1}^{M} \sum_{k=1}^{K} \frac{\Big( \sigma_{mk} + \sqrt{\sigma^2_{mk} + (16/3)\, b\, \Delta_{mk}} \Big)^2}{\Delta^2_{mk}}.$$
Although the variance-complexity $H^\sigma$ could be larger than the complexity H used in GapE, whenever the variances of the arms are small compared to the range b of the distribution, we expect $H^\sigma$ to
be smaller than H. Furthermore, if the arms have very different variances, then GapE-V is expected
to better capture the complexity of each arm and allocate the pulls accordingly. For instance, in the
case where all the gaps are the same, GapE tends to allocate pulls proportionally to the complexity Hmk and it would perform an almost uniform allocation over bandits and arms. On the other
hand, the variances of the arms could be very heterogeneous and GapE-V would adapt the allocation
strategy by pulling more often the arms whose values are more uncertain.
Adaptive GapE and GapE-V. A drawback of GapE and GapE-V is that the exploration parameter a should be tuned according to the complexities H and $H^\sigma$ of the multi-bandit problem, which are rarely known in advance. A straightforward solution to this issue is to move to an adaptive version of these algorithms by substituting H and $H^\sigma$ with suitable estimates $\hat H$ and $\hat H^\sigma$. At each step t of the adaptive GapE and GapE-V algorithms, we estimate these complexities as
$$\hat H(t) = \sum_{m,k} \frac{b^2}{\mathrm{UCB}_{\Delta_i}(t)^2}, \qquad \hat H^\sigma(t) = \sum_{m,k} \frac{\Big( \mathrm{LCB}_{\sigma_i}(t) + \sqrt{\mathrm{LCB}_{\sigma_i}(t)^2 + (16/3)\, b\, \mathrm{UCB}_{\Delta_i}(t)} \Big)^2}{\mathrm{UCB}_{\Delta_i}(t)^2}, \quad \text{where}$$
$$\mathrm{UCB}_{\Delta_i}(t) = \hat\Delta_i(t-1) + \sqrt{\frac{1}{2 T_i(t-1)}} \qquad \text{and} \qquad \mathrm{LCB}_{\sigma_i}(t) = \max\Big( 0,\; \hat\sigma_i(t-1) - \sqrt{\frac{2}{T_i(t-1) - 1}} \Big),$$
with the index i ranging over the bandit-arm pairs (m, k). Similar to the adaptive version of UCB-E in [1], $\hat H$ and $\hat H^\sigma$ are lower-confidence bounds on the true complexities H and $H^\sigma$. Note that the GapE and GapE-V bounds written for the optimal value of
a indicate an inverse relation between the complexity and the exploration. By using a lower-bound
on the true H and $H^\sigma$, the algorithms tend to explore arms more uniformly and this allows them to
increase the accuracy of their estimated complexities. Although we do not analyze these algorithms,
we empirically show in Sec. 4 that they are in fact able to match the performance of the GapE and
GapE-V algorithms.
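A minimal sketch of these plug-in estimates, with the current statistics held in arrays indexed by bandit-arm pair (all arms assumed pulled at least twice):

```python
import numpy as np

def adaptive_complexities(gap_hat, sigma_hat, T, b=1.0):
    """Lower-confidence estimates H-hat(t) and H-hat^sigma(t).
    gap_hat, sigma_hat, T: flat arrays over all bandit-arm pairs."""
    ucb_gap = gap_hat + np.sqrt(1.0 / (2.0 * T))
    lcb_sig = np.maximum(0.0, sigma_hat - np.sqrt(2.0 / (T - 1.0)))
    H_hat = np.sum(b**2 / ucb_gap**2)
    H_sigma_hat = np.sum((lcb_sig + np.sqrt(lcb_sig**2
                                            + (16.0 / 3.0) * b * ucb_gap))**2
                         / ucb_gap**2)
    return H_hat, H_sigma_hat
```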
4 Numerical Simulations
In this section, we report numerical simulations of the gap-based algorithms presented in this paper,
GapE and GapE-V, and their adaptive versions A-GapE and A-GapE-V, and compare them with Unif
[Plots omitted: each panel reports the maximum probability of error as a function of the exploration parameter η.]
Figure 2: (left) Problem 1: Comparison between GapE, adaptive GapE, and the uniform strategies.
(right) Problem 2: Comparison between GapE, GapE-V, and adaptive GapE-V algorithms.
[Plots omitted: maximum probability of error as a function of the exploration parameter η for each algorithm.]
Figure 3: Performance of the algorithms in Problem 3.
and Unif+UCB-E algorithms introduced in Sec. 3.1. The results of our experiments, both those in the paper and those in App. A of [8], indicate that 1) GapE successfully adapts its allocation strategy
to the complexity of each bandit and outperforms the uniform allocation strategies, 2) the use of
the empirical variance in GapE-V can significantly improve the performance over GapE, and 3) the
adaptive versions of GapE and GapE-V that estimate the complexities H and $H^\sigma$ online attain the same performance as the basic algorithms, which receive H and $H^\sigma$ as an input.
Experimental setting. We use the following three problems in our experiments. Note that b = 1
and that a Rademacher distribution with parameters (x, y) takes value x or y with probability 1/2.
- Problem 1. n = 700, M = 2, K = 4. The arms have Bernoulli distributions with parameters: bandit 1 = (0.5, 0.45, 0.4, 0.3), bandit 2 = (0.5, 0.3, 0.2, 0.1).
- Problem 2. n = 1000, M = 2, K = 4. The arms have Rademacher distributions with parameters (x, y): bandit 1 = {(0, 1.0), (0.45, 0.45), (0.25, 0.65), (0, 0.9)} and bandit 2 = {(0.4, 0.6), (0.45, 0.45), (0.35, 0.55), (0.25, 0.65)}.
- Problem 3. n = 1400, M = 4, K = 4. The arms have Rademacher distributions with parameters (x, y): bandit 1 = {(0, 1.0), (0.45, 0.45), (0.25, 0.65), (0, 0.9)}, bandit 2 = {(0.4, 0.6), (0.45, 0.45), (0.35, 0.55), (0.25, 0.65)}, bandit 3 = {(0, 1.0), (0.45, 0.45), (0.25, 0.65), (0, 0.9)}, and bandit 4 = {(0.4, 0.6), (0.45, 0.45), (0.35, 0.55), (0.25, 0.65)}.
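For concreteness, the following sketch computes the per-bandit complexities of Problem 2 from its Rademacher parameters, using the fact that a Rademacher(x, y) arm has mean (x+y)/2 and standard deviation (y-x)/2 together with the H and H^σ definitions above; the printed values are rough, our own computation:

```python
import numpy as np

bandits = [[(0, 1.0), (0.45, 0.45), (0.25, 0.65), (0, 0.9)],        # bandit 1
           [(0.4, 0.6), (0.45, 0.45), (0.35, 0.55), (0.25, 0.65)]]  # bandit 2
b = 1.0
for arms in bandits:
    mu = np.array([(x + y) / 2 for x, y in arms])
    sig = np.array([(y - x) / 2 for x, y in arms])
    gaps = np.array([abs(np.delete(mu, k).max() - mu[k]) for k in range(len(mu))])
    H_m = np.sum(b**2 / gaps**2)
    H_sigma_m = np.sum((sig + np.sqrt(sig**2 + (16 / 3) * b * gaps))**2 / gaps**2)
    print(round(H_m), round(H_sigma_m))   # roughly (1600, 1400) and (1600, 600)
```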
All the algorithms, except the uniform allocation, have an exploration parameter a. The theoretical analysis suggests that a should be proportional to $\frac{n}{H}$. Although a could be optimized according to the bound, since the constants in the analysis are not accurate, we will run the algorithms with $a = \eta \frac{n}{H}$, where η is a parameter which is empirically tuned (in the experiments we report four different values for η). If H correctly defines the complexity of the exploration problem (i.e., the number of samples needed to find the best arms with high probability), η should simply correct the inaccuracy of the constants in the analysis, and thus, the range of its nearly-optimal values should be constant across different problems. In Unif+UCB-E, UCB-E is run with the budget of n/M and the same parameter η for all the bandits. Finally, we set $n \approx H^\sigma$, since we expect $H^\sigma$ to roughly capture the number of pulls
necessary to solve the pure exploration problem with high probability. In Figs. 2 and 3, we report
the performance $\ell(n)$, i.e., the probability of failing to identify the best arm in all the bandits after n rounds, of the gap-based algorithms as well as the Unif and Unif+UCB-E strategies. The results are averaged over $10^5$ runs and the error bars correspond to three times the estimated standard deviation. In all
the figures the performance of Unif is reported as a horizontal dashed line.
The left panel of Fig. 2 displays the performance of Unif+UCB-E, GapE, and A-GapE in Problem 1.
As expected, Unif+UCB-E has a better performance (23.9% probability of error) than Unif (29.4%
probability of error), since it adapts the allocation within each bandit so as to pull more often the
nearly-optimal arms. However, the two bandit problems are not equally difficult. In fact, their
complexities are very different ($H_1 \approx 925$ and $H_2 \approx 67$), and thus, far fewer samples are needed
to identify the best arm in the second bandit than in the first one. Unlike Unif+UCB-E, GapE
adapts its allocation strategy to the complexities of the bandits (on average only 19% of the pulls are
allocated to the second bandit), and at the same time to the arm complexities within each bandit (in
the first bandit the averaged allocation of GapE is (37%, 36%, 20%, 7%)). As a result, GapE has a
probability of error of 15.7%, which represents a significant improvement over Unif+UCB-E.
The right panel of Fig. 2 compares the performance of GapE, GapE-V, and A-GapE-V in Problem 2.
In this problem, all the gaps are equal ($\Delta_{mk} = 0.05$), thus all the arms (and bandits) have the same complexity $H_{mk} = 400$. As a result, GapE tends to implement a nearly uniform allocation, which results in a small difference between Unif and GapE (28% and 25% accuracy, respectively). The reason why GapE is still able to improve over Unif may be explained by the difference between static and dynamic allocation strategies, and it is further investigated in App. A of [8]. Unlike the gaps, the variance of the arms is extremely heterogeneous. In fact, the variance of the arms of bandit 1 is bigger than in bandit 2, thus making it harder to solve. This difference is captured by the definition of $H^\sigma$ ($H^\sigma_1 \approx 1400 > H^\sigma_2 \approx 600$). Note also that $H^\sigma \leq H$. As discussed in Sec. 3.2, since GapE-V takes into account the empirical variance of the arms, it is able to adapt to the complexity $H^\sigma_{mk}$ of each bandit-arm pair and to focus more on uncertain arms. GapE-V improves the final
accuracy by almost 10% w.r.t. GapE. From both panels of Fig. 2, we also notice that the adaptive
algorithms achieve similar performance to their non-adaptive counterparts. Finally, we notice that
a good choice of parameter ? for GapE-V is always close to 2 and 4 (see also [8] for additional
experiments), while GapE needs ? to be tuned more carefully, particularly in Problem 2 where the
large values of ? try to compensate the fact that H does not successfully capture the real complexity
of the problem. This further strengthens the intuition that H ? is a more accurate measure of the
complexity for the multi-bandit pure exploration problem.
While Problems 1 and 2 are relatively simple, we report the results of the more complicated Problem 3 in Fig. 3. The experiment is designed so that the complexity w.r.t. the variance of each bandit
and within each bandit is strongly heterogeneous. In this experiment, we also introduce UCBE-V
that extends UCB-E by taking into account the empirical variance similarly to GapE-V. The results confirm the previous findings and show the improvement achieved by introducing empirical
estimates of the variance and allocating non-uniformly over bandits.
5 Conclusion
In this paper, we studied the problem of best arm identification in a multi-bandit multi-armed setting.
We introduced a gap-based exploration algorithm, called GapE, and proved an upper-bound for its
probability of error. We extended the basic algorithm to also consider the variance of the arms and
proved an upper-bound for its probability of error. We also introduced adaptive versions of these
algorithms that estimate the complexity of the problem online. The numerical simulations confirmed
the theoretical findings that GapE and GapE-V outperform other allocation strategies, and that their
adaptive counterparts are able to estimate the complexity without worsening the global performance.
Although GapE does not know the gaps, the experimental results reported in [8] indicate that it
might outperform a static allocation strategy, which knows the gaps in advance, thus suggesting
that an adaptive strategy could perform better than a static one. This observation asks for further
investigation. Moreover, we plan to apply the algorithms introduced in this paper to the problem of
rollout allocation for classification-based policy iteration in reinforcement learning [9, 6], where the
goal is to identify the greedy action (arm) in each of the states (bandit) in a training set.
Acknowledgments. Experiments presented in this paper were carried out using the Grid'5000 experimental testbed (https://www.grid5000.fr). This work was supported by the Ministry of Higher Education and Research, the Nord-Pas de Calais Regional Council and FEDER through the "contrat de projets état-région 2007-2013", the French National Research Agency (ANR) under project LAMPADA no. ANR-09-EMER-007, the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 231495, and the PASCAL2 European Network of Excellence.
References
[1] J.-Y. Audibert, S. Bubeck, and R. Munos. Best arm identification in multi-armed bandits. In Proceedings of the Twenty-Third Annual Conference on Learning Theory, pages 41-53, 2010.
[2] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Tuning bandit algorithms in stochastic environments. In Marcus Hutter, Rocco Servedio, and Eiji Takimoto, editors, Algorithmic Learning Theory, volume 4754 of Lecture Notes in Computer Science, pages 150-165. Springer Berlin / Heidelberg, 2007.
[3] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235-256, 2002.
[4] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandit problems. In Proceedings of the Twentieth International Conference on Algorithmic Learning Theory, pages 23-37, 2009.
[5] K. Deng, J. Pineau, and S. Murphy. Active learning for personalizing treatment. In IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, 2011.
[6] C. Dimitrakakis and M. Lagoudakis. Rollout sampling approximate policy iteration. Machine Learning Journal, 72(3):157-171, 2008.
[7] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research, 7:1079-1105, 2006.
[8] V. Gabillon, M. Ghavamzadeh, A. Lazaric, and S. Bubeck. Multi-bandit best arm identification. Technical Report 00632523, INRIA, 2011.
[9] M. Lagoudakis and R. Parr. Reinforcement learning as classification: Leveraging modern classifiers. In Proceedings of the Twentieth International Conference on Machine Learning, pages 424-431, 2003.
[10] O. Maron and A. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In Proceedings of Advances in Neural Information Processing Systems 6, 1993.
[11] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In 22nd Annual Conference on Learning Theory, 2009.
[12] V. Mnih, Cs. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In Proceedings of the Twenty-Fifth International Conference on Machine Learning, pages 672-679, 2008.
[13] H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527-535, 1952.
3,842 | 4,479 | MAP Inference for
Bayesian Inverse Reinforcement Learning
Jaedeug Choi and Kee-Eung Kim
Department of Computer Science
Korea Advanced Institute of Science and Technology
Daejeon 305-701, Korea
[email protected], [email protected]
Abstract
The difficulty in inverse reinforcement learning (IRL) arises in choosing the best
reward function since there are typically an infinite number of reward functions
that yield the given behaviour data as optimal. Using a Bayesian framework, we
address this challenge by using the maximum a posteriori (MAP) estimation for
the reward function, and show that most of the previous IRL algorithms can be
modeled into our framework. We also present a gradient method for the MAP estimation based on the (sub)differentiability of the posterior distribution. We show
the effectiveness of our approach by comparing the performance of the proposed
method to those of the previous algorithms.
1 Introduction
The objective of inverse reinforcement learning (IRL) is to determine the decision making agent?s
underlying reward function from its behaviour data and the model of environment [1]. The significance of IRL has emerged from problems in diverse research areas. In animal and human behaviour
studies [2], the agent?s behaviour could be understood by the reward function since the reward function reflects the agent?s objectives and preferences. In robotics [3], IRL provides a framework for
making robots learn to imitate the demonstrator?s behaviour using the inferred reward function.
In other areas related to reinforcement learning, such as neuroscience [4] and economics [5], IRL
addresses the non-trivial problem of finding an appropriate reward function when building a computational model for decision making.
In IRL, we generally assume that the agent is an expert in the problem domain and hence it behaves optimally in the environment. Using the Markov decision process (MDP) formalism, the IRL
problem is defined as finding the reward function that the expert is optimizing given the behaviour
data of state-action histories and the environment model of state transition probabilities. In the last
decade, a number of studies have addressed IRL in a direct (reward learning) and indirect (policy
learning by inferring the reward function, i.e., apprenticeship learning) fashions. Ng and Russell [6]
proposed a sufficient and necessary condition on the reward functions that guarantees the optimality
of the expert?s policy and formulated a linear programming (LP) problem to find the reward function from the behaviour data. Extending their work, Abbeel and Ng [7] presented an algorithm for
finding the expert?s policy from its behaviour data with a performance guarantee on the learned policy. Ratliff et al. [8] applied the structured max-margin optimization to IRL and proposed a method
for finding the reward function that maximizes the margin between the expert?s policy and all other
policies. Neu and Szepesvári [9] provided an algorithm for finding the policy that minimizes the
deviation from the behaviour. Their algorithm unifies the direct method that minimizes a loss function of the deviation and the indirect method that finds an optimal policy from the learned reward
function using IRL. Syed and Schapire [10] proposed a method to find a policy that improves the
expert?s policy using a game-theoretic framework. Ziebart et al. [11] adopted the principle of the
maximum entropy for learning the policy whose feature expectations are constrained to match those
of the expert's behaviour. In addition, Neu and Szepesvári [12] provided a (non-Bayesian) unified
view for comparing the similarities and differences among previous IRL algorithms.
IRL is an inherently ill-posed problem since there may be an infinite number of reward functions
that yield the expert?s policy as optimal. Previous approaches summarized above employ various
preferences on the reward function to address the non-uniqueness. For example, Ng and Russell [6]
search for the reward function that maximizes the difference in the values of the expert?s policy and
the second best policy. More recently, Ramachandran and Amir [13] presented a Bayesian approach
formulating the reward preference as the prior and the behaviour compatibility as the likelihood, and
proposed a Markov chain Monte Carlo (MCMC) algorithm to find the posterior mean of the reward
function.
In this paper, we propose a Bayesian framework subsuming most of the non-Bayesian IRL algorithms in the literature. This is achieved by searching for the maximum-a-posteriori (MAP) reward
function, in contrast to computing the posterior mean. We show that the posterior mean can be problematic for the reward inference since the loss function is integrated over the entire reward space,
even including those inconsistent with the behaviour data. Hence, the inferred reward function can
induce a policy much different from the expert?s policy. The MAP estimate, however, is more robust in the sense that the objective function (the posterior probability in our case) is evaluated on
a single reward function. In order to find the MAP reward function, we present a gradient method
using the differentiability result of the posterior, and show the effectiveness of our approach through
experiments.
2 Preliminaries
2.1 MDPs
A Markov decision process (MDP) is defined as a tuple $\langle S, A, T, R, \gamma, \alpha \rangle$: S is the finite set of states; A is the finite set of actions; T is the state transition function where $T(s, a, s')$ denotes the probability $P(s'|s, a)$ of changing to state $s'$ from state s by taking action a; R is the reward function where $R(s, a)$ denotes the immediate reward of executing action a in state s, whose absolute value is bounded by $R_{\max}$; $\gamma \in [0, 1)$ is the discount factor; $\alpha$ is the initial state distribution where $\alpha(s)$ denotes the probability of starting in state s. Using matrix notations, the transition function is denoted as an $|S||A| \times |S|$ matrix $T$, and the reward function is denoted as an $|S||A|$-dimensional vector $R$.
A policy is defined as a mapping $\pi : S \rightarrow A$. The value of policy $\pi$ is the expected discounted return of executing the policy, defined as $V^\pi = \mathbb{E}\big[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\big|\, \pi, \alpha \big]$, where the initial state $s_0$ is determined according to the initial state distribution $\alpha$ and action $a_t$ is chosen by policy $\pi$ in state $s_t$. The value function of policy $\pi$ for each state s is computed by $V^\pi(s) = R(s, \pi(s)) + \gamma \sum_{s' \in S} T(s, \pi(s), s') V^\pi(s')$, such that the value of policy $\pi$ is calculated by $V^\pi = \sum_s \alpha(s) V^\pi(s)$. Similarly, the Q-function is defined as $Q^\pi(s, a) = R(s, a) + \gamma \sum_{s' \in S} T(s, a, s') V^\pi(s')$. We can rewrite the equations for the value function and the Q-function in matrix notations as
$$V^\pi = R^\pi + \gamma T^\pi V^\pi, \qquad Q^\pi_a = R_a + \gamma T_a V^\pi \quad (1)$$
where $T^\pi$ is an $|S| \times |S|$ matrix with the $(s, s')$ element being $T(s, \pi(s), s')$, $T_a$ is an $|S| \times |S|$ matrix with the $(s, s')$ element being $T(s, a, s')$, $R^\pi$ is an $|S|$-dimensional vector with the s-th element being $R(s, \pi(s))$, $R_a$ is an $|S|$-dimensional vector with the s-th element being $R(s, a)$, and $Q^\pi_a$ is an $|S|$-dimensional vector with the s-th element being $Q^\pi(s, a)$.
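Equation (1) gives the value function in closed form, $V^\pi = (I - \gamma T^\pi)^{-1} R^\pi$. A minimal sketch in Python, assuming T is stored as an array of shape (|S|, |A|, |S|) and R as (|S|, |A|):

```python
import numpy as np

def policy_evaluation(T, R, pi, gamma):
    """Solve V^pi = R^pi + gamma * T^pi V^pi by a linear solve.
    T: (S, A, S) transition probabilities, R: (S, A) rewards,
    pi: (S,) deterministic policy, gamma: discount in [0, 1)."""
    S = T.shape[0]
    T_pi = T[np.arange(S), pi]            # (S, S): rows T(s, pi(s), .)
    R_pi = R[np.arange(S), pi]            # (S,):  R(s, pi(s))
    V = np.linalg.solve(np.eye(S) - gamma * T_pi, R_pi)
    Q = R + gamma * np.einsum('sap,p->sa', T, V)   # Q^pi(s, a)
    return V, Q
```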
An optimal policy $\pi^*$ maximizes the value function for all the states, and thus should satisfy the Bellman optimality equation: $\pi$ is an optimal policy if and only if for all $s \in S$, $\pi(s) \in \arg\max_{a \in A} Q^\pi(s, a)$. We denote $V^* = V^{\pi^*}$ and $Q^* = Q^{\pi^*}$.
When the state space is large, the reward function is often linearly parameterized: $R(s, a) = \sum_{i=1}^{d} w_i \phi_i(s, a)$ with the basis functions $\phi_i : S \times A \rightarrow \mathbb{R}$ and the weight vector $w = [w_1, w_2, \cdots, w_d]$. Each basis function $\phi_i$ has a corresponding basis value $V^\pi_i$ of policy $\pi$: $V^\pi_i = \mathbb{E}\big[ \sum_{t=0}^{\infty} \gamma^t \phi_i(s_t, a_t) \,\big|\, \pi, \alpha \big]$.
We also assume that the expert's behaviour is given as the set $\mathcal{X}$ of M trajectories executed by the expert's policy $\pi_E$, where the m-th trajectory is an H-step sequence of state-action pairs: $\{(s^m_1, a^m_1), (s^m_2, a^m_2), \cdots, (s^m_H, a^m_H)\}$. Given the set of trajectories, the value and the basis value of the expert's policy $\pi_E$ can be empirically estimated by
$$\hat V^E = \frac{1}{M} \sum_{m=1}^{M} \sum_{h=1}^{H} \gamma^{h-1} R(s^m_h, a^m_h), \qquad \hat V^E_i = \frac{1}{M} \sum_{m=1}^{M} \sum_{h=1}^{H} \gamma^{h-1} \phi_i(s^m_h, a^m_h).$$
In addition, we can empirically estimate the expert's policy $\hat\pi_E$ and its state visitation frequency $\hat\mu_E$ from the trajectories:
$$\hat\pi_E(s, a) = \frac{\sum_{m=1}^{M} \sum_{h=1}^{H} \mathbf{1}(s^m_h = s \wedge a^m_h = a)}{\sum_{m=1}^{M} \sum_{h=1}^{H} \mathbf{1}(s^m_h = s)}, \qquad \hat\mu_E(s) = \frac{1}{MH} \sum_{m=1}^{M} \sum_{h=1}^{H} \mathbf{1}(s^m_h = s).$$
In the rest of the paper, we use the notation f(R) or f(x; R) for function f in order to be explicit that f is computed using reward function R. For example, the value function $V^\pi(s; R)$ denotes the value of policy $\pi$ for state s using reward function R.
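A minimal sketch of the empirical estimators of this subsection, assuming trajectories are given as equal-length lists of (state, action) pairs and basis features as an array:

```python
import numpy as np

def expert_estimates(trajs, S, A, gamma, phi):
    """Empirical estimates from M expert trajectories of equal length H.
    trajs: list of [(s, a), ...]; phi: (S, A, d) basis features."""
    M, H = len(trajs), len(trajs[0])
    counts = np.zeros((S, A))
    v_basis = np.zeros(phi.shape[-1])
    for traj in trajs:
        for h, (s, a) in enumerate(traj):
            counts[s, a] += 1
            v_basis += gamma**h * phi[s, a]        # gamma^(h-1) with h from 1
    v_basis /= M                                   # V-hat_i^E for each basis i
    visits = counts.sum(axis=1)
    pi_hat = counts / np.maximum(visits, 1)[:, None]   # pi-hat_E(s, a)
    mu_hat = visits / (M * H)                          # mu-hat_E(s)
    return v_basis, pi_hat, mu_hat
```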
2.2 Reward Optimality Condition
Ng and Russell [6] presented a necessary and sufficient condition for reward function R of an MDP to guarantee the optimality of policy $\pi$: $Q^\pi_a(R) \preceq V^\pi(R)$ for all $a \in A$. From the condition, we obtain the following corollary (although it is a succinct reformulation of the theorem in [6], the proof is provided in the supplementary material).
Corollary 1. Given an MDP\R $\langle S, A, T, \gamma, \alpha \rangle$, policy $\pi$ is optimal if and only if reward function R satisfies
$$\big[ I - (I^A - \gamma T)(I - \gamma T^\pi)^{-1} E^\pi \big] R \preceq 0, \quad (2)$$
where $E^\pi$ is an $|S| \times |S||A|$ matrix with the $(s, (s', a'))$ element being 1 if $s = s'$ and $\pi(s') = a'$, and $I^A$ is an $|S||A| \times |S|$ matrix constructed by stacking the $|S| \times |S|$ identity matrix $|A|$ times.
We refer to Equation (2) as the reward optimality condition w.r.t. policy $\pi$. Since the linear inequalities define the region of the reward functions that yield policy $\pi$ as optimal, we refer to the region bounded by Equation (2) as the reward optimality region w.r.t. policy $\pi$. Note that there exist infinitely many reward functions in the reward optimality region, even including constant reward functions (e.g., $R = c\mathbf{1}$ where $c \in [-R_{\max}, R_{\max}]$). In other words, even when we are presented with the expert's policy, there are infinitely many reward functions to choose from, including the degenerate ones. To resolve this non-uniqueness in solutions, IRL algorithms in the literature employ various preferences on reward functions.
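A minimal sketch of Eq. (2) as a numerical check, using the same array shapes as before; the action-major stacking order of the reward vector is an assumption of this sketch, chosen to match the construction of $I^A$:

```python
import numpy as np

def reward_optimality_matrix(T, pi, gamma):
    """Build the matrix C of Eq. (2): pi is optimal for R iff C @ r <= 0,
    where r stacks R(s, a) in action-major order, r[a*S + s] = R(s, a).
    T: (S, A, S) transitions, pi: (S,) deterministic policy."""
    S, A, _ = T.shape
    T_sa = T.transpose(1, 0, 2).reshape(A * S, S)    # row a*S+s holds T(s, a, .)
    T_pi = T[np.arange(S), pi]                       # (S, S)
    E_pi = np.zeros((S, A * S))
    E_pi[np.arange(S), pi * S + np.arange(S)] = 1.0  # picks out R(s, pi(s))
    I_A = np.tile(np.eye(S), (A, 1))                 # identity stacked |A| times
    M = np.linalg.solve(np.eye(S) - gamma * T_pi, E_pi)  # (I - gamma T^pi)^-1 E^pi
    return np.eye(A * S) - (I_A - gamma * T_sa) @ M

# usage: pi is optimal for reward vector r exactly when (C @ r <= 1e-9).all()
```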
2.3 Bayesian framework for IRL (BIRL)
Ramachandran and Amir [13] proposed a Bayesian framework for IRL by encoding the reward
function preference as the prior and the optimality confidence of the behaviour data as the likelihood.
We refer to their work as BIRL.
Assuming the rewards are i.i.d., the prior in BIRL is computed by
$$P(R) = \prod_{s \in S,\, a \in A} P(R(s, a)). \quad (3)$$
Various distributions can be used as the prior. For example, the uniform prior can be used if we have
no knowledge about the reward function other than its range, and a Gaussian or a Laplacian prior
can be used if we prefer rewards to be close to some specific values.
The likelihood in BIRL is defined as an independent exponential distribution analogous to the softmax function:
$$P(\mathcal{X}|R) = \prod_{m=1}^{M} \prod_{h=1}^{H} P(a^m_h | s^m_h, R) = \prod_{m=1}^{M} \prod_{h=1}^{H} \frac{\exp\big( \beta Q^*(s^m_h, a^m_h; R) \big)}{\sum_{a \in A} \exp\big( \beta Q^*(s^m_h, a; R) \big)} \quad (4)$$
[Plot omitted: the posterior $P(R(s_1), R(s_5)|\mathcal{X})$ over the $(R(s_1), R(s_5))$ plane.]
Figure 1: (a) 5-state chain MDP. (b) Posterior for R(s1 ) and R(s5 ) of the 5-state chain MDP.
where $\beta$ is a parameter that is equivalent to the inverse of the temperature in the Boltzmann distribution. The posterior over the reward function is then formulated by combining the prior and the likelihood, using Bayes' theorem:
$$P(R|\mathcal{X}) \propto P(\mathcal{X}|R)\, P(R). \quad (5)$$
BIRL uses a Markov chain Monte Carlo (MCMC) algorithm to compute the posterior mean of the
reward function.
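A minimal sketch of the (unnormalized) BIRL log-posterior of Eqs. (3)-(5). The routine `optimal_q` (computing $Q^*$ for a given reward, e.g., by value iteration) is an assumed helper, and the Gaussian prior is an illustrative choice, not mandated by the framework:

```python
import numpy as np
from scipy.special import logsumexp

def birl_log_posterior(R, trajs, T, gamma, beta, prior_mu=0.0, prior_sigma=1.0):
    """Unnormalized log P(R | X) = log P(X | R) + log P(R).
    R: (S, A) reward, trajs: list of [(s, a), ...] state-action trajectories."""
    Q = optimal_q(T, R, gamma)                 # (S, A); assumed helper
    log_lik = 0.0
    for traj in trajs:
        for s, a in traj:
            log_lik += beta * Q[s, a] - logsumexp(beta * Q[s])
    log_prior = -0.5 * np.sum((R - prior_mu) ** 2) / prior_sigma ** 2
    return log_lik + log_prior
```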
3 MAP Inference in Bayesian IRL
In the Bayesian approach to IRL, the reward function can be determined using different estimates,
such as the posterior mean, median, or maximum-a-posteriori (MAP). The posterior mean is commonly used since it can be shown to be optimal under the mean square error function. However,
the problem with the posterior mean in Bayesian IRL is that the error is integrated over the entire
space of reward functions, even including infinitely many rewards that induce policies inconsistent
with the behaviour data. This can yield a posterior mean reward function with an optimal policy
again inconsistent with the data. On the other hand, the MAP does not involve an objective function
that is integrated over the reward function space; it is simply a point that maximizes the posterior
probability. Hence, it is more robust to infinitely many inconsistent reward functions. We present a
simple example that compares the posterior mean and the MAP reward function estimation.
Consider an MDP with 5 states arranged in a chain, 2 actions, and the discount factor 0.9. As shown
in Figure 1(a), we denote the leftmost state as s1 and the rightmost state as s5 . Action a1 moves to
the state on the right with probability 0.6 and to the state on the left with probability 0.4. Action a2
always moves to state s1 . The true reward of each state is [0.1, 0, 0, 0, 1], hence the optimal policy
chooses a1 in every state. Suppose that we already know R(s2 ), R(s3 ), and R(s4 ) which are all 0,
and estimate R(s1 ) and R(s5 ) from the behaviour data X which contains optimal actions for all the
states. We can compute the posterior P (R(s1 ), R(s5 )|X ) using Equations (3), (4), and (5) under the
assumption that $0 \leq R \leq 1$ and priors $P(R(s_1))$ being $\mathcal{N}(0.1, 1)$ and $P(R(s_5))$ being $\mathcal{N}(1, 1)$.
The reward optimality region can be also computed using Equation (2).
Figure 1(b) presents the posterior distribution of the reward function. The true reward, the MAP
reward, and the posterior mean reward are marked with the black star, the blue circle, and the red
cross, respectively. The black solid line is the boundary of the reward optimality region. Although
the prior mean is set to the true reward, the posterior mean is outside the reward optimality region.
An optimal policy for the posterior mean reward function chooses action a2 rather than action a1
in state s1 , while an optimal policy for the MAP reward function is identical to the true one. The
situation gets worse when using the uniform prior. An optimal policy for the posterior mean reward
function chooses action a2 in states s1 and s2 , while an optimal policy for the MAP reward function
is again identical to the true one.
In the rest of this section, we additionally show that most of the IRL algorithms in the literature can
be cast as searching for the MAP reward function in Bayesian IRL. The main insight comes from
the fact that these algorithms try to optimize an objective function consisting of a regularization term
for the preference on the reward function and an assessment term for the compatibility of the reward
function with the behaviour data. The objective function is naturally formulated as the posterior in
a Bayesian framework by encoding the regularization into the prior and the data compatibility into
the likelihood. In order to subsume different approaches used in the literature, we generalize the
Table 1: IRL algorithms and their equivalent f(X;R) and prior for the Bayesian formulation. q ∈ {1, 2} is for representing L1 or L2 slack penalties.

Previous algorithm                                        f(X;R)     Prior
Ng and Russell's IRL from sampled trajectories [6]        f_V        Uniform
MMP without the loss function [8]                         (f_V)^q    Gaussian
MWAL [10]                                                 f_G        Uniform
Policy matching [9]                                       f_J        Uniform
MaxEnt [11]                                               f_E        Uniform
likelihood in Equation (4) to the following:

P(X|R) ∝ exp(α f(X;R))

where α is a parameter for scaling the likelihood and f(X;R) is a function which will be defined appropriately to encode the data compatibility assessment used in each IRL algorithm. We then have
the following result (the proof is provided in the supplementary material):
Theorem 1 IRL algorithms listed in Table 1 are equivalent to computing the MAP estimates with the prior and the likelihood using f(X;R) defined as follows:
• f_V(X;R) = V̂^E(R) - V^{π*(R)}(R)
• f_G(X;R) = min_i [ V_i^{π*(R)}(R) - V̂_i^E ]
• f_J(X;R) = - Σ_{s,a} μ̂_E(s) ( J(s,a;R) - π̂_E(s,a) )²
• f_E(X;R) = log P_MaxEnt(X | T, R)
where π*(R) is an optimal policy induced by the reward function R, J(s,a;R) is a smooth mapping from the reward function R to a greedy policy such as the soft-max function, and P_MaxEnt is the distribution on the behaviour data (trajectory or path) satisfying the principle of maximum entropy.
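As a small illustration of the first entry, the sketch below (our own code, assuming a `solve_mdp` helper that returns the optimal state values, plus a simple trajectory format) evaluates f_V as the empirical return of the demonstrations minus the optimal value under R, averaged over the observed start states.

```python
import numpy as np

def f_V(trajs, R, gamma, solve_mdp):
    """trajs: list of [(s_0, a_0), (s_1, a_1), ...]; R: reward over states."""
    v_hat = np.mean([sum(gamma ** h * R[s] for h, (s, a) in enumerate(tr))
                     for tr in trajs])                 # empirical value of the data
    V_star = solve_mdp(R)                              # optimal values, length |S|
    starts = np.array([tr[0][0] for tr in trajs])
    return v_hat - V_star[starts].mean()               # f_V(X; R)
```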
The MAP estimation approach provides a rich framework for explaining the previous non-Bayesian
IRL algorithms in a unified manner, as well as encoding various types of a priori knowledge into the
prior distribution. Note that this framework can exploit the insights behind apprenticeship learning
algorithms even if they do not explicitly learn a reward function (e.g., MWAL [10]).
4 A Gradient Method for Finding the MAP Reward Function
We have proposed a unifying framework for Bayesian IRL and suggested that the MAP estimate can
be a better solution to the IRL problem. We can then reformulate the IRL problem into the posterior
optimization problem, which is finding RMAP that maximizes the (log unnormalized) posterior:
RMAP = argmaxR P (R|X ) = argmaxR [log P (X |R) + log P (R)]
Before presenting a gradient method for the optimization problem, we show that the generalized
likelihood is differentiable almost everywhere.
The likelihood is defined for measuring the compatibility of the reward function R with the behaviour data X. This is often accomplished using the optimal value function V* or the optimal Q-function Q* w.r.t. R. For example, the empirical value of X is compared with V* [6, 8], X is directly compared to the learned policy (e.g. the greedy policy from Q*) [9], or the probability of following the trajectories in X is computed using Q* [13]. Thus, we generally assume that P(X|R) = g(X, V*(R)) or g(X, Q*(R)), where g is differentiable w.r.t. V* or Q*. The remaining question is the differentiability of V* and Q* w.r.t. R, which we address in the following two theorems (the proofs are provided in the supplementary material):
Theorem 2 V*(R) and Q*(R) are convex.
Theorem 3 V*(R) and Q*(R) are differentiable almost everywhere.
Theorems 2 and 3 relate to the previous work on gradient methods for IRL. Neu and Szepesvári [9] showed that Q*(R) is Lipschitz continuous, and except on a set of measure zero (almost everywhere), it is Fréchet differentiable by Rademacher's theorem. We have obtained the same result based on the reward optimality region, and additionally identified the condition under which V*(R) and Q*(R) are non-differentiable (refer to the proof for details). Ratliff et al. [8] used a subgradient of their objective function because it involves differentiating V*(R). Using Theorem 3 for computing the subgradient of their objective function yields an identical result.
Assuming a differentiable prior, we can compute the gradient of the posterior using the result in Theorem 3 and the chain rule. If the posterior is convex, we will find the MAP reward function. Otherwise, as in [9], we will obtain a locally optimal solution. In the next section, we will experimentally
show that the locally optimal solutions are nonetheless better than the posterior mean in practice.
This is due to the property that they are generally found within the reward optimality region w.r.t.
the policy consistent with the behaviour data.
The gradient method uses the update rule R_new ← R + δ_t ∇_R P(R|X), where δ_t is an appropriate step-size (or learning rate). Since computing ∇_R P(R|X) involves computing an optimal policy for the current reward function and a matrix inversion, caching these results helps reduce repetitive computation. The idea is to compute the reward optimality region for checking whether we can reuse a cached result. If R_new is inside the reward optimality region of an already visited reward function R', they share the same optimal policy and hence the same ∇_R V*(R) or ∇_R Q*(R). Given policy π, the reward optimality region is defined by H^π = I - (I^A - γT)(I - γT^π)^{-1} E^π, and we can reuse the cached result if H^π R_new ⪯ 0. The gradient method using this idea is presented in Algorithm 1.
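A minimal numpy sketch of H^π (our own code, following the definition above, for rewards over state-action pairs ordered (a, s); the helper name is an assumption):

```python
import numpy as np

def reward_opt_region(T, pi, gamma):
    """T: (A, S, S) transition tensor; pi: length-S array of greedy actions.
    Returns H^pi with the property: pi is optimal for R iff H^pi @ R <= 0,
    where R is a reward vector over state-action pairs ordered (a, s)."""
    A, S, _ = T.shape
    T_flat = T.reshape(A * S, S)                     # row (a*S + s) is T[a, s, :]
    I_A = np.tile(np.eye(S), (A, 1))                 # expands V(s) to all (a, s)
    T_pi = T[pi, np.arange(S)]                       # S x S, rows T[pi[s], s, :]
    E_pi = np.zeros((S, A * S))
    E_pi[np.arange(S), pi * S + np.arange(S)] = 1.0  # picks out R(s, pi(s))
    M = (I_A - gamma * T_flat) @ np.linalg.inv(np.eye(S) - gamma * T_pi) @ E_pi
    return np.eye(A * S) - M                         # H^pi
```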
Algorithm 1 Gradient method for MAP inference in Bayesian IRL
Input: MDP\R, behaviour data X, step-size sequence {δ_t}, number of iterations N
 1: Initialize R
 2: π ← solveMDP(R)
 3: H^π ← computeRewardOptRgn(π)
 4: Π ← {⟨π, H^π⟩}
 5: for t = 1 to N do
 6:   R_new ← R + δ_t ∇_R P(R|X)
 7:   if isNotInRewardOptRgn(R_new, H^π) then
 8:     ⟨π, H^π⟩ ← findRewardOptRgn(R_new, Π)
 9:     if isEmpty(⟨π, H^π⟩) then
10:       π ← solveMDP(R_new)
11:       H^π ← computeRewardOptRgn(π)
12:       Π ← Π ∪ {⟨π, H^π⟩}
13:     end if
14:   end if
15:   R ← R_new
16: end for
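A hedged Python sketch of Algorithm 1 (our own illustrative code, reusing the reward_opt_region sketch above; `solve_mdp`, `grad_log_post`, and the tolerance are assumptions):

```python
import numpy as np

def map_irl(R0, T, gamma, grad_log_post, solve_mdp, steps, n_iter=500, tol=1e-8):
    R, pi = R0.copy(), solve_mdp(R0)
    H = reward_opt_region(T, pi, gamma)
    cache = [(pi, H)]                                 # visited optimality regions
    for t in range(n_iter):
        R_new = R + steps[t] * grad_log_post(R, pi)   # gradient uses current pi
        if not np.all(H @ R_new <= tol):              # left the current region
            hit = next(((p, h) for p, h in cache
                        if np.all(h @ R_new <= tol)), None)
            if hit is None:                           # genuinely new region
                pi = solve_mdp(R_new)
                H = reward_opt_region(T, pi, gamma)
                cache.append((pi, H))
            else:
                pi, H = hit                           # reuse cached policy/region
        R = R_new
    return R
```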
5 Experimental Results
The first set of experiments was conducted in N × N gridworlds [7]. The agent can move west, east, north, or south, but with probability 0.3 it fails and moves in a random direction. The grids are partitioned into M × M non-overlapping regions, so there are (N/M)² regions. The basis function is defined by a 0-1 indicator function for each region. The linearly parameterized reward function is determined by the weight vector w, sampled i.i.d. from a zero-mean Gaussian prior with variance 0.1 and |w_i| ≤ 1 for all i. The discount factor is set to 0.99.
We compared the performance of our gradient method to those of other IRL algorithms in the literature: Maximum Margin Planning (MMP) [8], Maximum Entropy (MaxEnt) [11], Policy Matching with natural gradient (NatPM) and the plain gradient (PlainPM) [9], and Bayesian Inverse Reinforcement Learning (BIRL) [13]. We executed our gradient method for finding the MAP using three different choices of the likelihood: B denotes the BIRL likelihood, and E and J denote the likelihood with f_E and f_J, respectively. For the Bayesian IRL algorithms (BIRL and MAP), two types of prior are prepared: U denotes the uniform prior and G denotes the true Gaussian prior. We evaluated the performance of the algorithms using the difference between V* (the value of the expert's policy) and V^L (the value of the optimal policy induced by the learned weight w^L, measured on the true weight w*), and the difference between w* and w^L in the L2 norm.
Table 2: Results in the gridworld problems.

||w* - w^L||_2
            |S| = 24×24               |S| = 32×32
dim(w)      36      144     576       64      256     1024
NatPM       3.04    6.84    16.83     3.50    8.88    21.25
PlainPM     3.77    6.63    16.60     5.21    9.05    17.36
MaxEnt      6.05    11.98   22.11     7.91    15.48   25.52
MMP         0.85    1.26    2.38      0.83    1.61    3.17
BIRL-U      3.27    5.67    n.a.      3.78    7.89    n.a.
BIRL-G      0.86    1.36    n.a.      0.98    1.71    n.a.
MAP-B-U     4.45    8.46    13.87     5.68    10.50   18.21
MAP-B-G     0.83    1.30    2.40      0.94    1.62    3.17
MAP-E-G     0.83    1.22    2.33      0.76    1.53    3.13
MAP-J-G     0.48    1.10    2.30      0.65    1.51    3.11

V* - V^L
            |S| = 24×24               |S| = 32×32
dim(w)      36      144     576       64      256     1024
NatPM       2.49    8.97    8.74      1.08    12.84   10.97
PlainPM     0.15    0.67    0.51      0.41    1.28    1.91
MaxEnt      0.33    0.60    0.60      0.95    2.22    2.91
MMP         10.74   16.32   13.72     13.58   10.59   8.87
BIRL-U      1.38    0.80    n.a.      0.35    2.24    n.a.
BIRL-G      2.21    0.54    n.a.      0.50    0.90    n.a.
MAP-B-U     0.13    0.57    1.06      1.63    1.34    2.17
MAP-B-G     0.16    0.45    0.40      0.41    0.77    0.87
MAP-E-G     0.19    0.44    0.42      0.43    1.29    1.88
MAP-J-G     0.17    0.42    0.37      0.38    0.90    1.21

[Figure 2 panels (a) and (b): V* - V^L versus CPU time (sec) for BIRL and MAP-B.]
Figure 2: CPU timing results of BIRL and MAP-B in the 24×24 gridworld problem. (a) dim(w) = 36. (b) dim(w) = 144.
We used training data with 10 trajectories of 50 time steps, collected from simulated runs of the expert's policy. Table 2 shows the average performance over the 10 training data. Most of the algorithms found a weight that induces an optimal policy whose performance is as good as that of the expert's policy (i.e., small V* - V^L), except for MMP and NatPM. The poor performance of MMP was due to the small size of the training data, as already noted in [14]. The poor performance of NatPM may be due to the ineffectiveness of the pseudo-metric in high dimensional reward spaces, since PlainPM was able to produce good performance. Regarding the learned weights, the algorithms using the true prior (MMP, BIRL, and the variants of MAP) found weights close to the true one (i.e., small ||w* - w^L||_2). Comparing BIRL and MAP-B is especially meaningful since they share the same prior and likelihood; the only difference was in computing the mean versus the MAP of the posterior. MAP-B was consistently better than BIRL in terms of both ||w* - w^L||_2 and V* - V^L. Finally, we note that the correct prior yields smaller ||w* - w^L||_2 and V* - V^L when we compare PlainPM, MaxEnt, BIRL-U, and MAP-B-U (uniform prior) to MAP-J-G, MAP-E-G, BIRL-G, and MAP-B-G (Gaussian prior), respectively.
Figure 2 compares the CPU timing results of the MCMC algorithm in BIRL and the gradient method in MAP-B for the 24×24 gridworld with 36 and 144 basis functions. BIRL takes much longer CPU time to converge than MAP-B since the former requires a much larger number of iterations to converge and, in addition, each iteration requires solving an MDP with a sampled reward function. The CPU time gap gets larger as we increase the dimension of the reward function. Caching the optimal policies and gradients sped up the gradient method by factors of 1.5 to 4.2 until convergence, although this is not explicitly shown in the figure.
The second set of experiments was performed on a simplified car race problem, modified from [14]. The racetrack is shown in Figure 3. The shaded and white cells indicate the off-track and on-track locations, respectively. The state consists of the location and velocity of the car. The velocities in the vertical and horizontal directions are represented as 0, 1, or 2, and the net velocity is computed as the sum of the squared directional velocities. The net velocity is regarded as high if greater than 2, zero if 0, and low otherwise. The car can increase, decrease, or maintain one of the directional velocities. The control of the car succeeds with p = 0.9 if the net velocity is low, but with p = 0.6 if high. If the control fails, the velocity is maintained, and if the car attempts to move outside the racetrack, it remains in the previous location with velocity 0. The basis functions are 0-1 indicator functions for the goal locations, off-track locations, and the 3 net velocity values (zero, low, high) while the car is on track. Hence, there are 3150 states, 5 actions, and 5 basis functions. The discount factor is set to 0.99.
Table 3: True and learned weights in the car race problem.

                                          Velocity while on track
              Goal        Off-track       Zero         Low          High
Fast expert   1.00        0.00            0.00         0.00         0.10
BIRL          0.96±0.02   -0.20±0.03      -0.04±0.01   -0.12±0.02   0.32±0.02
MAP-B         1.00±0.00   -0.19±0.02      -0.03±0.01   -0.13±0.01   0.29±0.01
Table 4: Statistics of the policies simulated in the car race problem.

              Avg. steps    Avg. steps in locations    Avg. steps in velocity
              to goal       Off-track    On-track      Zero        Low         High
Fast expert   20.41         1.56         17.85         2.01        3.40        12.44
BIRL          32.98±6.42    2.13±0.60    29.85±6.03    3.33±0.52   4.34±0.79   22.18±4.84
MAP-B         24.77±1.92    1.68±0.26    22.09±1.71    2.70±0.16   3.38±0.18   16.01±1.48
We designed two experts. The slow expert prefers low velocity and avoids the off-track locations, w = [1, -0.1, 0, 0.1, 0]. The fast expert prefers high velocity, w = [1, 0, 0, 0, 0.1]. We compared the posterior mean and the MAP using the priors P(w1) = N(1, 1) and P(w2) = P(w3) = P(w4) = P(w5) = N(0, 1), assuming that we do not know the experts' preferences on the locations or the velocity, but we do know that the experts' ultimate goal is to reach one of the goal locations. We used BIRL for the posterior mean and MAP-B for the MAP estimation, hence using the identical prior and likelihood.

We used 10 training data, each consisting of 5 trajectories. We omit the results regarding the slow expert since both BIRL and MAP-B successfully found a weight similar to the true one, which induced the slow expert's policy as optimal. However, for the fast expert, MAP-B was significantly better than BIRL.¹ Table 3 shows the true and learned weights, and Table 4 shows some statistics characterizing the expert's and learned policies. The policy from BIRL tends to remain at high speed on the track for significantly more steps than the one from MAP-B, since BIRL converged to a larger ratio of w5 to w1.

[Figure 3: Racetrack. G marks goal cells, S marks start cells.]
6 Conclusion
We have argued that, when using a Bayesian framework for learning reward functions in IRL, the MAP estimate is preferable over the posterior mean. Experimental results confirmed the effectiveness of our approach. We have also shown that the MAP estimation approach subsumes non-Bayesian IRL algorithms in the literature, and allows us to incorporate various types of a priori knowledge about the reward functions and the measurement of the compatibility with behaviour data.
We proved that the generalized posterior is differentiable almost everywhere, and proposed a gradient method to find a locally optimal solution to the MAP estimation. We provided the theoretical
result equivalent to the previous work on gradient methods for non-Bayesian IRL, but used a different proof based on the reward optimality region.
Our work could be extended in a number of ways. For example, the IRL algorithm for partially
observable environments in [15] mostly relies on Ng and Russell [6]?s heuristics for MDPs, but our
work opens up new opportunities to leverage the insight behind other IRL algorithms for MDPs.
Acknowledgments
This work was supported by National Research Foundation of Korea (Grant# 2009-0069702) and the
Defense Acquisition Program Administration and the Agency for Defense Development of Korea
(Contract# UD080042AD).

¹ All the results in Table 4, except for the average number of steps in the off-track locations, are statistically significant at the 95% confidence level.
References
[1] S. Russell. Learning agents for uncertain environments (extended abstract). In Proceedings of COLT,
1998.
[2] P. R. Montague and G. S. Berns. Neural economics and the biological substrates of valuation. Neuron,
36(2), 2002.
[3] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration.
Robotics and Autonomous Systems, 57(5), 2009.
[4] Y. Niv. Reinforcement learning in the brain. Journal of Mathematical Psychology, 53(3), 2009.
[5] E. Hopkins. Adaptive learning models of consumer behavior. Journal of Economic Behavior and Organization, 64(3-4), 2007.
[6] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Proceedings of ICML, 2000.
[7] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of
ICML, 2004.
[8] N. D. Ratliff, J. A. Bagnell, and M. A. Zinkevich. Maximum margin planning. In Proceedings of ICML,
2006.
[9] G. Neu and C. Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Proceedings of UAI, 2007.
[10] U. Syed and R. E. Schapire. A game-theoretic approach to apprenticeship learning. In Proceedings of
NIPS, 2008.
[11] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning.
In Proceedings of AAAI, 2008.
[12] G. Neu and C. Szepesvári. Training parsers by inverse reinforcement learning. Machine Learning, 77(2), 2009.
[13] D. Ramachandran and E. Amir. Bayesian inverse reinforcement learning. In Proceedings of IJCAI, 2007.
[14] A. Boularias and B. Chaib-Draa. Bootstrapping apprenticeship learning. In Proceedings of NIPS, 2010.
[15] J. Choi and K. Kim. Inverse reinforcement learning in partially observable environments. In Proceedings
of IJCAI, 2009.
3,843 | 448 | Forward Dynamics Modeling
of Speech Motor Control
Using Physiological Data
Makoto Hirayama
Eric Vatikiotis-Bateson
Mitsuo Kawato
ATR Auditory and Visual Perception Research Laboratories
2 - 2, Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, JAPAN
Michael I. Jordan
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
We propose a paradigm for modeling speech production based on neural networks. We focus on characteristics of the musculoskeletal system. Using real physiological data (articulator movements and EMG from muscle activity), a neural network learns the forward dynamics relating motor commands to muscles and the ensuing articulator behavior. After learning, simulated perturbations were used to assess properties of the acquired model, such as natural frequency, damping, and interarticulator couplings. Finally, a cascade neural network is used to generate continuous motor commands from a sequence of discrete articulatory targets.
1 INTRODUCTION
A key problem in the formal study of human language is to understand the process by
which linguistic intentions become speech. Speech production entails extraordinary
coordination among diverse neurophysiological and anatomical structures from which
unfolds through time a complex acoustic signal that conveys to listeners something of
the speaker's intention. Analysis of the speech acoustics has not revealed the encoding of
these intentions, generally conceived to be ordered strings of some basic unit, e.g., the
phoneme. Nor has analysis of the articulatory system provided an answer, although
recent pioneering work by Jordan (1986), Saltzman (1986), Laboissiere (1990) and others
has brought us closer to an understanding of the articulatory-to-acoustic transform and has
demonstrated the importance of modeling the articulatory system's temporal properties.
However, these efforts have been limited to kinematic modeling because they have not
had access to the neuromuscular activity of the articulatory structures.
In this study, we are using neural networks to model speech production. The principal steps of this endeavor are shown in Figure 1. In this paper, we focus on characteristics of
the musculoskeletal system. Using real physiological data - articulator movements and
EMG from muscle activity - a neural network learns the forward dynamics relating motor
commands to muscles and the ensuing articulator behavior. After learning, a cascade
neural network model (Kawato, Maeda, Uno, & Suzuki, 1990) is used to generate
continuous motor commands.
Figure 1: Forward Model of Speech Production. The stages are: Intention to Speak; Intended Phoneme Sequence (with Global Performance Parameters); Transformation from Phoneme to Gesture; Articulatory Targets; Motor Command Generation; Motor Command; Musculo-Skeletal System; Articulator Trajectories; Transformation from Articulatory Movement to Acoustic Signal; Acoustic Wave Radiation.
2 EXPERIMENT
Movement, EMG, and acoustic data were recorded for one speaker who produced reiterant
versions of two sentences. Speaking rate was fast and the reiterant syllables were ba and bo.
Figure 2 shows approximate marker positions for tracking positions of the jaw (horizontal and vertical) and lips (vertical only), and muscle insertion points for hooked-wire, bipolar EMG recording from four muscles: ABD (anterior belly of the digastric) for jaw lowering, OOI (orbicularis oris inferior) and MTL (mentalis) for lower lip raising and protrusion, and GGA (genioglossus anterior) for tongue tip lowering.
All movement and EMG (rectified and integrated) signals were digitized (12 bit) at 200 Hz
and then numerically smoothed at 40 Hz. Position signals were differentiated to obtain
velocity and then, after smoothing at 22 Hz, differentiated again to get acceleration.
Figure 3 shows data for one reiterant utterance using ba.
Articulator markers - UL: upper lip (vertical); LL: lower lip (vertical); JX: jaw (horizontal); JY: jaw (vertical). Muscles - ABD: anterior belly of the digastric; OOI: orbicularis oris inferior; MTL: mentalis; GGA: genioglossus anterior.
Figure 2: Approximate Positions of Markers and Muscle Insertion for Recording Movement and EMG
[Figure 3 plots, from top to bottom, the audio signal, the position (POS), velocity (VEL) and acceleration (ACC) traces for UL, LL, JX and JY, and the EMG traces for ABD, OOI, MTL and GGA, over 0-5 s.]
Figure 3: Time Series Representations for All Channels of One Reiterant Rendition Using ba
3 FORWARD DYNAMICS MODELING OF THE MUSCULOSKELETAL SYSTEM AND TRAJECTORY PREDICTION FROM MUSCLE EMG
The forward dynamics model (FDM) for ba, bo production was obtained using a three-layer perceptron with back propagation (Rumelhart, Hinton, & Williams, 1986). The network learns the correlations between position, velocity, and EMG at time t and the changes of position and velocity for all articulators at the next time sample t+1.

After learning, the forward dynamics model is connected recurrently as shown in Figure 4. The network uses only the initial articulator position and velocity values and the continuous EMG "motor command" input to generate predicted trajectories. The FDM estimates the changes of position and velocity and sums them with the position and velocity values of the previous sample t to obtain estimated values at the next sample t+1.
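A minimal sketch of this recurrent rollout (our own illustration, not the authors' code; the interface of the trained network f is an assumption):

```python
import numpy as np

def rollout(f, pos0, vel0, emg):
    """f maps (position, velocity, EMG at t) to (dposition, dvelocity);
    emg has shape (T, n_muscles). Returns the predicted position trajectory."""
    pos, vel = pos0.copy(), vel0.copy()
    traj = [pos.copy()]
    for u_t in emg:
        dpos, dvel = f(np.concatenate([pos, vel, u_t]))
        pos, vel = pos + dpos, vel + dvel    # sum the predicted changes (Figure 4)
        traj.append(pos.copy())
    return np.array(traj)
```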
Figure 5 compares experimentally observed trajectories with trajectories predicted by this
network. Spatiotemporal characteristics are very similar, e.g., amplitude, frequency, and
phase, and demonstrate the generally good performance of the model. There is, however,
a tendency towards negative offset in the predicted positions. There are two important
limitations that reduce the current model's ability to compensate for position shifts in the
test utterance. First, there is no specified equilibrium or rest position in articulator space,
towards which articulators might tend in the absence of EMG activity. Second, the
acquired FDM is based on limited EMG; at most there is correlated EMG for only one
direction of motion per articulator. Addition of antagonist EMG and/or an estimate of
equilibrium position in articulator or, eventually, task coordinates should increase the
model's generalization capability.
[Figure 4 diagram: the FDM receives position, velocity and EMG at sample t, outputs the changes in position and velocity, and these are summed with the current values to yield the predicted trajectory at t+1, which is fed back as the next input.]
Figure 4: Recurrent Network for Trajectory Prediction from Muscle EMG
[Figure 5 overlays the network output on the observed position (POS) and velocity (VEL) trajectories of UL, LL, JX and JY over 0-5 s.]
Figure 5: Experimentally Observed vs. Predicted Trajectories
4 ESTIMATION OF DYNAMIC PARAMETER
To investigate quantitative characteristics of the obtained forward dynamics model, the model system's response to two types of simulated perturbation was examined.
The first simulated perturbation confirmed that the model system indeed learned an appropriate nonlinear dynamics and affords a rough estimation of its visco-elastic properties, such as natural frequency (1.0 Hz) and damping ratio (0.24). Simulated release of the lower lip at various distances from rest revealed underdamped though stable behavior, as shown in Figure 6a.
The second perturbation entailed observing articulator response to a step increase (50 % of
full-scale) in EMG activity for each muscle. Figure 6b demonstrates that the learned
relation between EMG input and articulator movement output is dynamical rather than
kinematic because articulator responses are not instantaneous. Learned responses to each
muscle's activation also show some interesting and reasonable (though not always correct)
couplings between different articulators.
[Figure 6, panel a: lower lip position after simulated release from rest position + 0.2, returning to rest in an underdamped manner. Panel b: position responses of UL, LL, JX and JY to a step increase (+0.5) in the EMG of each muscle (ABD, OOI, MTL, GGA).]
Figure 6: Visco-Elastic Property of the FDM Observed by Simulated Perturbations
5 MOTOR COMMAND GENERATION USING CASCADE NEURAL NETWORK MODEL
Observed articulator movements are smooth. Their smoothness is due partly to physical
dynamic properties (inertia, viscosity). Furthermore, smoothness may be an attribute of
the motor command itself, thereby resolving the ill-posed computational problem of
generating continuous motor commands from a small number of discrete articulatory
targets.
To test this, we incorporated a smoothness constraint on the motor command (rectified
EMG, in this case), which is conceptually similar to previously proposed constraints on
change of torque (Uno, Kawato, & Suzuki, 1989) and muscle-tension (Uno, Suzuki, &
Kawato, 1989). Two articulatory target (via-point) constraints were specified spatially,
one for consonant closure and the other for vowel opening, and assigned to each of the 21
consonant + vowel syllables. The alternating sequence of via-points was isochronous
(temporally equidistant) except for initial, medial and final pauses. The cascade neural
network (Figure 7) then generated smooth EMG and articulator trajectories whose
spatiotemporal asymmetry approximated the prosodic patterning of the natural test
utterances (Figure 8). Although this is only a preliminary implementation of via-point
and smoothness constraints, the model's ability to generate trajectories of appropriate
spatiotemporal complexity from a series of alternating via-point inputs is encouraging.
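The objective behind this scheme can be sketched as follows (our own illustration; the weighting lam and the reuse of the rollout helper above are assumptions): via-point errors of the rolled-out trajectory are penalised together with the squared change of the motor command sequence.

```python
import numpy as np

def cascade_loss(u, f, pos0, vel0, via_times, via_targets, lam=1.0):
    """u: (T, n_muscles) motor command sequence to be optimised."""
    traj = rollout(f, pos0, vel0, u)
    target_err = np.sum((traj[via_times] - via_targets) ** 2)  # hit the via-points
    smoothness = np.sum(np.diff(u, axis=0) ** 2)   # minimum motor-command change
    return target_err + lam * smoothness
```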
[Figure 7 diagram: from an initial gesture (position, velocity) and a sequence of articulatory targets (via-point positions and velocities), cascaded copies of the FDM are unrolled over time under a smoothness constraint on the motor command; the generated motor command drives the musculo-skeletal system to realize the articulator trajectory.]
Figure 7: Cascade Neural Network Model for Motor Command Generation
[Figure 8 shows the generated trajectories for UL, LL, JX and JY and the generated EMG for ABD, OOI, MTL and GGA over 0-5 s.]
Figure 8: Generated Motor Command (EMG) with Trajectory To Satisfy Articulatory Targets
6 CONCLUSION AND FUTURE WORK
Our intent here has been to provide a preliminary model of speech production based on the
articulatory system's dynamical properties. We used real physiological data (EMG) to obtain the forward dynamics model of the articulators from a multilayer perceptron.
After training, a recurrent network predicted articulator trajectories using the EMG signals
as the motor command input. Simulated perturbations were used to examine the model
system's response to isolated inputs and to assess its visco-elastic properties and
interarticulator couplings. Then, we incorporated a reasonable smoothness criterion (minimum motor-command change) into a cascade neural network that generated
realistic trajectories from a bead-like string of via-points.
We are now attempting to model various styles of real speech using data from more
muscles and articulators such as the tongue. Also, the scope of the model is being
expanded to incorporate global performance parameters for motor command generation,
and the transformations from phoneme to articulatory gesture and from articulatory
movement to acoustic signal.
Finally, a main goal of our work is to develop engineering applications for speech
synthesis and recognition. Although our model is still preliminary, we believe resolving
the difficulties posed by coarticulation, segmentation, prosody, and speaking style
ultimately depends on understanding physiological and computational aspects of speech
motor control.
Acknowledgement
We thank Vincent Gracco and Kiyoshi Oshima for muscle insertions; Haskins
Laboratories for use of their facilities (NIH grant DC-00121); Kiyoshi Honda, Philip
Rubin, Elliot Saltzman and Yoh'ichi Toh'kura for insightful discussion; and Kazunari
Nakane and Eiji Yodogawa for continuous encouragement. Further support was provided
by HFSP grants to M. Kawato and M. I. Jordan.
References
Jordan, M. I. (1986) Serial order: a parallel distributed processing approach. ICS (Institute for Cognitive Science, University of California) Report 8604.
Kawato, M., Maeda, M., Uno, Y. & Suzuki, R. (1990) Trajectory Formation of Arm Movement by Cascade Neural Network Model Based on Minimum Torque-change Criterion. Biol. Cybern. 62, 275-288.
Laboissiere, R., Schwartz, J. L. & Bailly, G. (1990) Motor Control for Speech Skills: a Connectionist Approach. Proceedings of the 1990 Summer School, Morgan Kaufmann Publishers, 319-327.
Rumelhart, D.E., Hinton, G.E. & Williams, R.J. (1986) Learning Internal Representations by Error Propagation. Parallel Distributed Processing, Chap. 8. MIT Press.
Saltzman, E.L. (1986) Task dynamic coordination of the speech articulators: A preliminary model. Experimental Brain Research, Series 15, 129-144.
Uno, Y., Kawato, M. & Suzuki, R. (1989) Formation and Control of Optimal Trajectory in Human Multijoint Arm Movement. Biol. Cybern. 61, 89-101.
Uno, Y., Suzuki, R. & Kawato, M. (1989) Minimum muscle-tension-change model which reproduces human arm movement. Proceedings of the 4th Symposium on Biological and Physiological Engineering, 299-302 (in Japanese).
3,844 | 4,480 | Generalised Coupled Tensor Factorisation
Y. Kenan Yılmaz
A. Taylan Cemgil
Umut Şimşekli
Department of Computer Engineering
Boğaziçi University, Istanbul, Turkey
[email protected], {taylan.cemgil, umut.simsekli}@boun.edu.tr
Abstract
We derive algorithms for generalised tensor factorisation (GTF) by building upon the well-established theory of Generalised Linear Models. Our algorithms are general in the sense that we can compute arbitrary factorisations in a message passing framework, derived for a broad class of exponential family distributions, including special cases such as Tweedie's distributions corresponding to β-divergences. By bounding the step size of the Fisher Scoring iteration of the GLM, we obtain general updates for real data and multiplicative updates for non-negative data. The GTF framework is then extended easily to address problems where multiple observed tensors are factorised simultaneously. We illustrate our coupled factorisation approach on synthetic data as well as on a musical audio restoration problem.
1 Introduction
A fruitful modelling approach for extracting meaningful information from highly structured multivariate datasets is based on matrix factorisations (MFs). In fact, many standard data processing
methods of machine learning and statistics such as clustering, source separation, independent components analysis (ICA), nonnegative matrix factorisation (NMF), latent semantic indexing (LSI)
can be expressed and understood as MF problems. These MF models also have well understood
probabilistic interpretations as probabilistic generative models. Indeed, many standard algorithms
mentioned above can be derived as maximum likelihood or maximum a-posteriori parameter estimation procedures. It is also possible to do a full Bayesian treatment for model selection [1].
Tensors appear as a natural generalisation of matrix factorisation, when observed data and/or a latent
representation have several semantically meaningful dimensions. Before giving a formal definition,
consider the following motivating example
X1^{i,j,k} ≈ Σ_r Z1^{i,r} Z2^{j,r} Z3^{k,r},    X2^{j,p} ≈ Σ_r Z2^{j,r} Z4^{p,r},    X3^{j,q} ≈ Σ_r Z2^{j,r} Z5^{q,r}    (1)
where X1 is an observed 3-way array and X2, X3 are 2-way arrays, while Zα for α = 1…5 are the latent 2-way arrays. Here, the 2-way arrays are just matrices, but this can be easily extended to objects having an arbitrary number of indices. As the term "N-way array" is awkward, we prefer using the more convenient term tensor. Here, Z2 is a shared factor, coupling all models. As the first model is a CP (Parafac) while the second and the third ones are MFs, we call the combined factorization a CP/MF/MF model. Such models are of interest when one can obtain different "views" of the same piece of information (here Z2) under different experimental conditions. Singh and Gordon [2] focused on a similar problem called collective matrix factorisation (CMF) or multi-matrix
factorisation, for relational learning but only for matrix factors and observations. In addition, their
generalised Bregman divergence minimisation procedure assumes matching link and loss functions.
For coupled matrix and tensor factorization (CMTF), recently [3] proposed a gradient-based all-at-once optimization method as an alternative to alternating least squares (ALS) optimization and
demonstrated their approach for a CP/MF coupled model. Similar models are used for protein-protein interaction (PPI) problems in gene regulation [4].
The main motivation of the current paper is to construct a general and practical framework for
computation of tensor factorisations (TF), by extending the well-established theory of Generalised
Linear Models (GLM). Our approach is also partially inspired by probabilistic graphical models:
our computation procedures for a given factorisation have a natural message passing interpretation.
This provides a structured and efficient approach that enables very easy development of application
specific custom models, priors or error measures as well as algorithms for joint factorisations where
an arbitrary set of tensors can be factorised simultaneously. Well known models of multiway analysis
(Parafac, Tucker [5]) appear as special cases and novel models and associated inference algorithms
can automatically be developed. In [6], the authors take a similar approach to tensor factorisations
as ours, but that work is limited to KL and Euclidean costs, generalising MF models of [7] to the
tensor case. It is possible to generalise this line of work to β-divergences [8], but none of these works
address the coupled factorisation case and consider only a restricted class of cost functions.
2 Generalised Linear Models for Matrix/Tensor Factorisation
To set the notation and our approach, we briefly review GLMs following closely the original notation
of [9, ch 5]. A GLM assumes that a data vector x has conditionally independently drawn components
xi according to an exponential family density
x_i ∼ exp{ (x_i θ_i - b(θ_i))/τ² - c(x_i, τ) },    ⟨x_i⟩ = x̂_i = ∂b(θ_i)/∂θ_i,    var(x_i) = τ² ∂²b(θ_i)/∂θ_i²    (2)
Here, θ_i are canonical parameters and τ² is a known dispersion parameter. ⟨x_i⟩ is the expectation of x_i and b(·) is the log partition function, enforcing normalization. The canonical parameters are not directly estimated; instead, one assumes a link function g(·) that 'links' the mean of the distribution x̂_i and assumes that g(x̂_i) = l_i^⊤ z, where l_i^⊤ is the ith row vector of a known model matrix L and z is the parameter vector to be estimated; A^⊤ denotes the matrix transpose of A. The model is linear in the sense that a function of the mean is linear in the parameters, i.e., g(x̂) = Lz. A Linear Model (LM) is a special case of GLM that assumes normality, i.e. x_i ∼ N(x_i; x̂_i, τ²), as well as linearity, which implies the identity link function g(x̂_i) = x̂_i = l_i^⊤ z, assuming the l_i are known. Logistic regression assumes a log link, g(x̂_i) = log x̂_i = l_i^⊤ z; here log x̂_i and z have a linear relationship [9].
The goal in classical GLM is to estimate the parameter vector z. This is typically achieved via a Gauss-Newton method (Fisher Scoring). The necessary objects for this computation are the log likelihood, its derivative, and the Fisher Information (the expected value of the negative second derivative of the log likelihood). These are easily derived as:
L = Σ_i [x_i θ_i - b(θ_i)]/τ² - Σ_i c(x_i, τ),        ∂L/∂z = (1/τ²) Σ_i (x_i - x̂_i) w_i g_x̂(x̂_i) l_i^⊤    (3)

⟨-∂²L/∂z²⟩ = (1/τ²) L^⊤ D L,        ∂L/∂z = (1/τ²) L^⊤ D G (x - x̂)    (4)

where w is a vector with elements w_i, and D and G are the diagonal matrices D = diag(w) and G = diag(g_x̂(x̂_i)), with

w_i = ( v(x̂_i) g_x̂²(x̂_i) )^{-1},        g_x̂(x̂_i) = ∂g(x̂_i)/∂x̂_i    (5)

with v(x̂_i) being the variance function, related to the observation variance by var(x_i) = τ² v(x̂_i).
Via Fisher Scoring, the general update equation in matrix form is written as

z ← z + (L^⊤ D L)^{-1} L^⊤ D G (x - x̂)    (6)
Although this formulation is somewhat abstract, it covers a very broad range of model classes that are used in practice. For example, an important special case appears when the variance functions are of the form v(x̂) = x̂^p. Setting p = {0, 1, 2, 3} corresponds to the Gaussian, Poisson, Exponential/Gamma, and Inverse Gaussian distributions [10, pp.30], which are special cases of the exponential family of distributions for any p, named Tweedie's family [11]. Those for p = {0, 1, 2}, in turn, correspond to the EU, KL and IS cost functions often used for NMF decompositions [12, 7].
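For illustration, a minimal numpy sketch of one Fisher Scoring step (6) for this Tweedie case with an identity link, so that G = I and w_i = x̂_i^{-p} (our own code; x̂ is assumed positive):

```python
import numpy as np

def fisher_scoring_step(z, L, x, p):
    """One step of update (6) with identity link g(xhat) = xhat = L @ z."""
    xhat = L @ z
    D = np.diag(xhat ** (-p))    # precisions w_i for the variance function v = xhat^p
    return z + np.linalg.solve(L.T @ D @ L, L.T @ D @ (x - xhat))
```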
2.1 Tensor Factorisations (TF) as GLMs
The key observation for expressing a TF model as a GLM is to identify the multilinear structure
and using an alternating optimization approach. To hide the notational complexity, we will give an
example with a simple matrix factorisation model; extension to tensors will require heavier notation,
but are otherwise conceptually straightforward. Consider a MF model
g(X̂) = Z1 Z2^⊤,        in scalar        g(X̂)^{i,j} = Σ_r Z1^{i,r} Z2^{j,r}    (7)
where Z1, Z2 and g(X̂) are matrices of compatible sizes. Indeed, by applying the vec operator (vectorization, stacking the columns of a matrix to obtain a vector) to both sides of (7), we obtain two equivalent representations of the same system

vec(g(X̂)) = (I_{|j|} ⊗ Z1) vec(Z2) = [∂(Z1 Z2^⊤)/∂Z2] vec(Z2) = ∇2 Z̃2    (8)

where I_{|j|} denotes the |j| × |j| identity matrix, ⊗ denotes the Kronecker product [13], and vec Z ≡ Z̃. Clearly, this is a GLM where ∇2 ≡ ∂g(X̂)/∂Z2 plays the role of a model matrix and Z̃2 is the parameter vector. By alternating between Z1 and Z2, we can maximise the log-likelihood iteratively; indeed this alternating maximisation is standard for solving matrix factorisation problems. In the sequel, we will show that a much broader range of algorithms can be readily derived in the GLM framework.
2.2 Generalised Tensor Factorisation
We define a tensor Λ as a multiway array with an index set V = {i_1, i_2, …, i_{|V|}}, where each index i_n for n = 1…|V| runs as i_n = 1…|i_n|. An element of the tensor Λ is a scalar that we denote by Λ(i_1, i_2, …, i_{|V|}) or Λ^{i_1,i_2,…,i_{|V|}}, or by the shorthand notation Λ(v), with v being a particular configuration. |v| denotes the number of all distinct configurations for V; e.g., if V = {i_1, i_2} then |v| = |i_1||i_2|. We call the form Λ(v) element-wise; the notation [ ] yields a tensor by enumerating all the indices, i.e., Λ = [Λ_{i_1,i_2,…,i_{|V|}}] or Λ = [Λ(v)]. For any two tensors X and Y of compatible order, X ∘ Y is an element-wise multiplication and, if not explicitly stressed, X/Y is an element-wise division. 1 is an object of all ones whose order depends on the context where it is used.
A generalised tensor factorisation problem is specified by an observed tensor X (with possibly missing entries, to be treated later), a collection of latent tensors to be estimated, Z_{1:|α|} = {Z_α} for α = 1…|α|, and an exponential family of the form (2). The index set of X is denoted by V_0 and the index set of each Z_α by V_α. The set of all model indices is V = ∪_{α=1}^{|α|} V_α. We use v_α (or v_0) to denote a particular configuration of the indices for Z_α (or X), while v̄_α denotes a configuration of the complement V̄_α = V/V_α. The goal is to find the latent Z_α that maximise the likelihood p(X|Z_{1:|α|}), where ⟨X⟩ = X̂ is given via

g(X̂(v_0)) = Σ_{v̄_0} Π_α Z_α(v_α)    (9)

To clarify our notation with an example, we express the CP (Parafac) model, defined as X̂(i, j, k) = Σ_r Z_1(i, r) Z_2(j, r) Z_3(k, r). In our notation, we take the identity link g(X̂) = X̂ and the index sets V = {i, j, k, r}, V_0 = {i, j, k}, V̄_0 = {r}, V_1 = {i, r}, V_2 = {j, r} and V_3 = {k, r}. Our
notation deliberately follows that of graphical models; the reader might find it useful to associate
indices with discrete random variables and factors with probability tables [14]. Obviously, while a
TF model does not represent a discrete probability measure, the algebraic structure is nevertheless
analogous.
To extend the discussion in Section 2.1 to the tensor case, we need the equivalent of the model matrix when updating Z_α. This is obtained by summing over the product of all the remaining factors

g(X̂(v_0)) = Σ_{v̄_0 ∩ v_α} Z_α(v_α) Σ_{v̄_0 ∩ v̄_α} Π_{α' ≠ α} Z_{α'}(v_{α'}) = Σ_{v̄_0 ∩ v_α} Z_α(v_α) L_α(o_α)

L_α(o_α) = Σ_{v̄_0 ∩ v̄_α} Π_{α' ≠ α} Z_{α'}(v_{α'})        with o_α ≡ (v_0 ∪ v_α) ∩ (v̄_0 ∪ v̄_α)

One quantity related to L_α is the derivative of the tensor g(X̂) w.r.t. the latent tensor Z_α, denoted ∇_α and defined as (following the convention of [13, pp 196])

∇_α = ∂g(X̂)/∂Z_α = I_{|v_0 ∩ v_α|} ⊗ L_α        with L_α ∈ R^{|v_0 ∩ v̄_α| × |v̄_0 ∩ v_α|}    (10)
The importance of L_α is that all the update rules can be formulated by a product and subsequent contraction of L_α with another tensor Q having exactly the same index set as the observed tensor X. As a notational abstraction, it is useful to define the following function.

Definition 1. The tensor valued function Δ_α^q(Q) : R^{|v_0|} → R^{|v_α|} is defined as

Δ_α^q(Q) = [ Σ_{v_0 ∩ v̄_α} Q(v_0) ∘ L_α(o_α)^q ]    (11)

with Δ_α(Q) being an object of the same order as Z_α and o_α ≡ (v_0 ∪ v_α) ∩ (v̄_0 ∪ v̄_α). Here, on the right side, the nonnegative integer q denotes an element-wise power, not to be confused with an index; on the left, it should be interpreted as a parameter of the Δ function. Arguably, the Δ function abstracts away all the tedious reshape and unfolding operations [5]. This abstraction also has an important practical facet: the computation of Δ is algebraically (almost) equivalent to the computation of marginal quantities on a factor graph, for which efficient message passing algorithms exist [14].
Example 1. TUCKER3 is defined as X̂^{i,j,k} = Σ_{p,q,r} A^{i,p} B^{j,q} C^{k,r} G^{p,q,r} with V = {i, j, k, p, q, r}, V_0 = {i, j, k}, V_A = {i, p}, V_B = {j, q}, V_C = {k, r}, V_G = {p, q, r}. Then, for the first factor A, the objects L_A and Δ_A(·) are computed as follows

L_A = [ Σ_{q,r} B^{j,q} C^{k,r} G^{p,q,r} ] = [ ((C ⊗ B) G^⊤)^p_{k,j} ] = [ L_A{}^p_{k,j} ]    (12)

Δ_A(Q) = [ Σ_{j,k} Q^i_{k,j} L_A{}^p_{k,j} ] = [ (Q L_A)^{i,p} ]    (13)

The index sets marginalised out for L_A and Δ_A are V̄_0 ∩ V̄_A = {p, q, r} ∩ {j, q, k, r} = {q, r} and V_0 ∩ V̄_A = {i, j, k} ∩ {j, q, k, r} = {j, k}. We also verify the order of the gradient ∇_A in (10) as I_{|i|} ⊗ L_A{}^p_{k,j} = ∇^{i,p}_{i,k,j}, which conforms to the matrix derivation convention [13, pp.196].
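In code, the Δ function for this example needs no explicit unfolding; a hedged numpy sketch (our own illustration) writes the contraction directly as an einsum:

```python
import numpy as np

def delta_A(Q, B, C, G, q=1):
    """Delta_A^q(Q) for TUCKER3: contract Q(i,j,k) with L_A(p; k,j) over (j,k)."""
    L_A = np.einsum('jq,kr,pqr->pkj', B, C, G)       # L_A = (C kron B) G^T
    return np.einsum('ijk,pkj->ip', Q, L_A ** q)     # element-wise power, then sum
```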
2.3 Iterative Solution for GTF
As we have now established a one-to-one relationship between GLM and GTF objects, such as the observation x ≡ vec X, the mean (and model estimate) x̂ ≡ vec X̂, the model matrix L ≡ L_α and the parameter vector z ≡ vec Z_α, we can write directly from (6)

Z̃_α ← Z̃_α + (∇_α^⊤ D ∇_α)^{-1} ∇_α^⊤ D G (vec X - vec X̂)        with ∇_α = ∂g(X̂)/∂Z_α    (14)
There are at least two ways in which this update can be further simplified. We may assume an identity link function, or alternatively we may choose matching link and loss functions such that they cancel each other smoothly [2]. In the sequel we consider the identity link g(X̂) = X̂, which results in g_X̂(X̂) = 1 and implies that G is the identity, i.e. G = I. We define a tensor W that plays the same role as w in (5), which becomes simply the precision (inverse variance function), i.e. W = 1/v(X̂), where for the Gaussian, Poisson, Exponential and Inverse Gaussian distributions we have simply W = X̂^{-p} with p = {0, 1, 2, 3} [10, pp 30]. Then, the update (14) is reduced to
Z̃_α ← Z̃_α + (∇_α^⊤ D ∇_α)^{-1} ∇_α^⊤ D (vec X - vec X̂)    (15)
After this simplification we obtain two update rules for GTF, for non-negative and for real data. The update (15) can be used to derive the multiplicative update rules (MUR) popularised by [15] for nonnegative matrix factorisation (NMF). MUR equations ensure non-negative parameter updates as long as one starts from non-negative initial values.
Theorem 1. The update equation (15) for nonnegative GTF reduces to the multiplicative form

Z_α ← Z_α ∘ Δ_α(W ∘ X) / Δ_α(W ∘ X̂)        s.t. Z_α(v_α) > 0    (16)

(Proof sketch) Due to space limitations we leave out the full details of the proof, but the idea is that the inverse of H = ∇_α^⊤ D ∇_α is identified as a step size which, by use of the results of the Perron-Frobenius theorem [16, pp 125], we further bound as

η = 2 / λ_max(∇_α^⊤ D ∇_α) < 2 Z̃_α / (∇_α^⊤ D vec X̂)        since λ_max(H) ≤ max_{v_α} (H Z̃_α)(v_α) / Z̃_α(v_α)    (17)
For the special case of the Tweedie family, where the precision is a function of the mean as W = X̂^{-p} for p = {0, 1, 2, 3}, the update (15) is reduced to

Z_α ← Z_α ∘ Δ_α(X̂^{-p} ∘ X) / Δ_α(X̂^{1-p})    (18)
For example, to update Z_2 for the NMF model X̂ = Z_1 Z_2, Δ_2 is Δ_2(Q) = Z_1^⊤ Q. Then for the
Gaussian (p = 0) this reduces to NMF-EU as Z_2 ← Z_2 ∘ (Z_1^⊤ X) / (Z_1^⊤ X̂). For the Poisson (p = 1)
it reduces to NMF-KL as Z_2 ← Z_2 ∘ (Z_1^⊤ (X/X̂)) / (Z_1^⊤ 1) [15].
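For readers who prefer code, the reductions above fit in a few lines; the following NumPy sketch is our own illustration of update (18) for this NMF model, with `p` selecting the Tweedie member (p = 0 recovers NMF-EU, p = 1 recovers NMF-KL) and `eps` an assumed small constant guarding against division by zero:

```python
# Hypothetical sketch of update (18) for NMF: Xhat = Z1 @ Z2.
# Delta_2(Q) = Z1.T @ Q and Delta_1(Q) = Q @ Z2.T.
import numpy as np

def mur_step(X, Z1, Z2, p=1, eps=1e-12):
    Xhat = Z1 @ Z2
    Z1 *= (X * Xhat**(-p)) @ Z2.T / ((Xhat**(1 - p)) @ Z2.T + eps)
    Xhat = Z1 @ Z2
    Z2 *= Z1.T @ (X * Xhat**(-p)) / (Z1.T @ Xhat**(1 - p) + eps)
    return Z1, Z2

rng = np.random.default_rng(0)
X = rng.random((30, 30)); Z1 = rng.random((30, 5)); Z2 = rng.random((5, 30))
for _ in range(100):
    Z1, Z2 = mur_step(X, Z1, Z2, p=1)   # p=1 reproduces NMF-KL [15]
```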
By dropping the non-negativity requirement we obtain the following update equation:

Theorem 2. The update equation for GTF with real data can be expressed as

Z_α ← Z_α + (2 / ∆_{α/0}) Δ_α(W ∘ (X − X̂)) / Δ_α^2(W)   with   ∆_{α/0} = |v_α ∩ v̄_0|   (19)

(Proof sketch) Again skipping the full details, as part of the proof we set Z_α = 1 in (17) specifically,
and replacing the matrix multiplication ∇_α^⊤ D ∇ 1 by ∇_α^⊤ D 1 ∆_{α/0} completes the proof. Here the
multiplier ∆_{α/0} is the cardinality arising from the fact that only ∆_{α/0} elements are non-zero in a row
of ∇_α^⊤ D ∇. Note, as an example for ∆_{α/0}, that if V_α ∩ V̄_0 = {p, q} then ∆_{α/0} = |p||q|, which is the number
of all distinct configurations for the index set {p, q}.
Missing data can be handled easily by dropping the missing data terms from the likelihood [17]. The
net effect of this is the addition of an indicator variable m_i to the gradient ∂L/∂z = Σ_i (x_i −
x̂_i) m_i w_i g_x̂(x̂_i) l_i^⊤, with m_i = 1 if x_i is observed, otherwise m_i = 0. Hence we simply define a mask
tensor M having the same order as the observation X, where the element M(v_0) is 1 if X(v_0) is
observed and zero otherwise. In the update equations, we merely replace W with W ∘ M.
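In code, the masking is a one-liner; a hedged sketch continuing the hypothetical NMF example above (reusing X, Z1, Z2 and rng), for the Poisson case where W = X̂^{-1}:

```python
# Missing data: replace W by W * M wherever it appears.  M[v0] = 1 where
# X[v0] is observed, 0 otherwise; here both numerator and denominator of
# the NMF-KL update are masked (Xhat**0 is a tensor of ones).
M = (rng.random((30, 30)) > 0.3).astype(float)   # ~70% observed, hypothetical
Xhat = Z1 @ Z2
Z2 *= Z1.T @ (M * X * Xhat**(-1)) / (Z1.T @ (M * Xhat**0) + 1e-12)
```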
3
Coupled Tensor Factorization
Here we address the problem where multiple observed tensors X_ν for ν = 1 . . . |ν| are factorised
simultaneously. Each observed tensor X_ν now has a corresponding index set V_{0,ν}, and a particular
configuration will be denoted by v_{0,ν} ≡ u_ν. Next, we define a |ν| × |α| coupling matrix R where

X̂_ν(u_ν) = Σ_{ū_ν} Π_α Z_α(v_α)^{R^{ν,α}},   R^{ν,α} = 1 if X_ν and Z_α are connected, 0 otherwise.   (20)
For the coupled factorisation, we get the following expression as the derivative of the log likelihood

∂L / ∂Z_α(v_α) = Σ_ν R^{ν,α} Σ_{u_ν ∩ v̄_α} [ X_ν(u_ν) − X̂_ν(u_ν) ] W_ν(u_ν) ∂X̂_ν(u_ν) / ∂Z_α(v_α)   (21)

where W_ν ≡ W(X̂_ν(u_ν)) are the precisions. Then proceeding as in Section 2.3 (i.e. getting the
Hessian and finding the Fisher information) we arrive at the update rule in vector form as

Z̃_α ← Z̃_α + ( Σ_ν R^{ν,α} ∇̃_{ν,α}^⊤ D_ν ∇̃_{ν,α} )^{-1} Σ_ν R^{ν,α} ∇̃_{ν,α}^⊤ D_ν ( X_ν − X̂_ν )   (22)
Figure 1: [diagram omitted: on the left, latent tensors Z_1 . . . Z_α . . . Z_|α| with arrows to observed tensors X_1 . . . X_ν . . . X_|ν|; on the right, factors A, B, C, D, E over observations X_1, X_2, X_3] (Left) Coupled factorisation structure where the arrow indicates the existence of the influence of latent tensor Z_α onto the observed tensor X_ν. (Right) The CP/MF/MF coupled factorisation problem in (1).
where ∇_{ν,α} = ∂g(X̂_ν)/∂Z_α. The update equations for the coupled case are quite intuitive; we
calculate the Δ_{α,ν} functions defined as

Δ_{α,ν}(Q)(v_α) = Σ_{u_ν ∩ v̄_α} Q(u_ν) Σ_{ū_ν ∩ v̄_α} Π_{α′≠α} Z_{α′}(v_{α′})^{R^{ν,α′}}   (23)

for each submodel and add the results:

Lemma 1. Update for non-negative CTF:

Z_α ← Z_α ∘ [ Σ_ν R^{ν,α} Δ_{α,ν}(W_ν ∘ X_ν) ] / [ Σ_ν R^{ν,α} Δ_{α,ν}(W_ν ∘ X̂_ν) ]   (24)

In the special case of a Tweedie family, i.e. for the distributions whose precision is given as W_ν = X̂_ν^{−p}, the
update is Z_α ← Z_α ∘ [ Σ_ν R^{ν,α} Δ_{α,ν}(X̂_ν^{−p} ∘ X_ν) ] / [ Σ_ν R^{ν,α} Δ_{α,ν}(X̂_ν^{1−p}) ].

Lemma 2. General update for CTF:

Z_α ← Z_α + (2 / ∆_{α/0}) [ Σ_ν R^{ν,α} Δ_{α,ν}(W_ν ∘ (X_ν − X̂_ν)) ] / [ Σ_ν R^{ν,α} Δ_{α,ν}^2(W_ν) ]   (25)

For the special case of the Tweedie family we plug in W_ν = X̂_ν^{−p} and get the related formula.
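A generic implementation of Lemma 1 can be organised around the coupling matrix; the sketch below is our own illustration, assuming callables `xhat(nu, Z)` and `deltas(nu, a, Q, Z)` that compute the model estimate X̂_ν and Δ_{α,ν}(Q) respectively:

```python
# Hypothetical generic sweep for Lemma 1 (non-negative CTF) with Tweedie
# precision W_nu = Xhat_nu**(-p).  R is the |nu| x |alpha| coupling matrix;
# deltas(nu, a, Q, Z) must return an array shaped like Z[a].
import numpy as np

def ctf_sweep(Z, X, R, xhat, deltas, p=1, eps=1e-12):
    for a in range(len(Z)):
        num, den = 0.0, 0.0
        for nu in range(len(X)):
            if R[nu][a] == 0:
                continue                      # Z[a] does not influence X[nu]
            Xh = xhat(nu, Z)                  # current model estimate of X[nu]
            num = num + deltas(nu, a, Xh**(-p) * X[nu], Z)
            den = den + deltas(nu, a, Xh**(1 - p), Z)
        Z[a] = Z[a] * num / (den + eps)
    return Z
```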
4
Experiments
Here we want to solve the CTF problem introduced in (1), which is a coupled CP/MF/MF problem

X̂_1^{i,j,k} = Σ_r A^{i,r} B^{j,r} C^{k,r},   X̂_2^{j,p} = Σ_r B^{j,r} D^{p,r},   X̂_3^{j,q} = Σ_r B^{j,r} E^{q,r}   (26)

where we employ the symbols A : E for the latent tensors instead of Z_α. This factorisation problem
has the following R matrix with |α| = 5, |ν| = 3:

R = [ 1 1 1 0 0 ; 0 1 0 1 0 ; 0 1 0 0 1 ]   with   X̂_1 = Σ A^1 B^1 C^1 D^0 E^0,  X̂_2 = Σ A^0 B^1 C^0 D^1 E^0,  X̂_3 = Σ A^0 B^1 C^0 D^0 E^1   (27)

We want to use the general update equation (25). This requires derivation of Δ_{α,ν}(·) for ν = 1 (CP)
and ν = 2 (MF), but not for ν = 3, since Δ_{α,3}(·) has the same shape as Δ_{α,2}(·). Here we show
the computation for B, i.e. for Z_2, which is the common factor:

Δ_{B,1}(Q) = [ Σ_{i,k} Q^{i,j,k} A^{i,r} C^{k,r} ] = Q_{(1)} (C ⊙ A)   (28)

Δ_{B,2}(Q) = [ Σ_p Q^{j,p} D^{p,r} ] = Q D   (29)

with Q_{(n)} being the mode-n unfolding operation that turns a tensor into matrix form [5]. In addition,
for ν = 1 the required scalar value ∆_{B/0} is |r| here, since V_B ∩ V̄_0 = {j, r} ∩ {r} = {r}, noting that the
value ∆_{B/0} is the same for ν = 2, 3. The simulated data size for the observables is |i| = |j| = |k| =
|p| = |q| = 30 while the latent dimension is |r| = 5. The number of iterations is 1000 with the
Euclidean cost, while the experiment produced similar results for the KL cost, as shown in Figure 2.
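For this CP/MF/MF problem, the model estimates and the Δ_{B,ν} objects of eqs (28)-(29) can be written with einsum; the following sketch is our own illustration, shown for the shared factor B only, and would need a thin dispatcher to plug into the generic loop sketched in Section 3:

```python
# Hypothetical instantiation for the CP/MF/MF model (26): model estimates
# and Delta_{B,nu}(Q) for the shared factor B; A, C, D, E are analogous.
import numpy as np

def xhat(nu, Z):
    A, B, C, D, E = Z
    if nu == 0:
        return np.einsum('ir,jr,kr->ijk', A, B, C)   # Xhat_1
    return B @ (D.T if nu == 1 else E.T)             # Xhat_2, Xhat_3

def delta_B(nu, Q, Z):
    A, B, C, D, E = Z
    if nu == 0:
        return np.einsum('ijk,ir,kr->jr', Q, A, C)   # eq. (28)
    return Q @ (D if nu == 1 else E)                 # eq. (29), and its nu=3 twin
```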
Figure 2: [plots omitted: panels A-E showing the first column of the original, initial (start-up) and final (estimate) factors] The figure compares the original, the initial (start-up) and the final (estimate) factors for Z_α = A, B, C, D, E. Only the first column, i.e. Z_α(1:10, 1), is plotted. Note that CP factorisation is unique up to permutation and scaling [5] while MF factorisation is not unique, but when coupled with CP it recovers the original data as shown in the figure. For visualisation, to find the correct permutation, for each Z_α the matching permutation between the original and the estimate is found by solving an orthogonal Procrustes problem [18, pp. 601].
4.1
Audio Experiments
In this section, we illustrate a real data application of our approach, where we reconstruct missing
parts of an audio spectrogram X(f, t), that represents the STFT coefficient magnitude at frequency
bin f and time frame t of a piano piece, see top left panel of Fig.3. This is a difficult matrix
completion problem: as entire time frames (columns of X) are missing, low rank reconstruction
techniques are likely to be ineffective. Yet such missing data patterns arise often in practice, e.g.,
when packets are dropped during digital communication. We will develop here a novel approach,
expressed as a coupled TF model. In particular, the reconstruction will be aided by an approximate
musical score, not necessarily belonging to the played piece, and spectra of isolated piano sounds.
Pioneering work of [19] has demonstrated that, when an audio spectrogram of music is decomposed
using NMF as X_1(f, t) ≈ X̂(f, t) = Σ_i D(f, i) E(i, t), the computed factors D and E tend to be
semantically meaningful and correlate well with the intuitive notion of spectral templates (harmonic
profiles of musical notes) and a musical score (reminiscent of a piano roll representation such as a
MIDI file). However, as time frames are modeled conditionally independently, it is impossible to
reconstruct audio with this model when entire time frames are missing.
In order to restore the missing parts in the audio, we form a model that can incorporate musical
information about chord structures and how they evolve in time. In order to achieve this, we hierarchically decompose
the excitation matrix E as a convolution of some basis matrices and their weights:
E(i, t) = Σ_{k,τ} B(i, τ, k) C(k, t − τ). Here the basis tensor B encapsulates both vertical and temporal information of the notes that are likely to be used in a musical piece; the musical piece to
be reconstructed will share B, possibly played at different times or tempi as modelled by G. After
replacing E with the decomposed version, we get the following model (eqs 30-32):

X̂_1(f, t) = Σ_{i,τ,k,d} D(f, i) B(i, τ, k) C(k, d) Z(d, t, τ)    (Test file)   (30)

X̂_2(i, n) = Σ_{τ,k,m} B(i, τ, k) G(k, m) Y(m, n, τ)    (MIDI file)   (31)

X̂_3(f, p) = Σ_i D(f, i) F(i, p) T(i, p)    (Merged training files)   (32)
Here we have introduced new dummy indices d and m, and new (fixed) factors Z(d, t, τ) = δ(d −
t + τ) and Y(m, n, τ) = δ(m − n + τ) to express this model in our framework. In eq 32, while
forming X_3 we concatenate isolated recordings corresponding to different notes. Besides, T is a
0-1 matrix, where T(i, p) = 1 (0) if the note i is played (not played) during the time frame p, and
F models the time-varying amplitudes of the training data. The R matrix for this model is defined as

R = [ 1 1 1 1 0 0 0 0 ; 0 1 0 0 1 1 0 0 ; 1 0 0 0 0 0 1 1 ]

with   X̂_1 = Σ D^1 B^1 C^1 Z^1 G^0 Y^0 F^0 T^0,  X̂_2 = Σ D^0 B^1 C^0 Z^0 G^1 Y^1 F^0 T^0,  X̂_3 = Σ D^1 B^0 C^0 Z^0 G^0 Y^0 F^1 T^1   (33)
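The fixed factors Z and Y are simply shifted identity (delta) tensors; a hypothetical construction, with sizes chosen arbitrarily for illustration:

```python
# Hypothetical construction of Z(d,t,tau) = delta(d - t + tau) and
# Y(m,n,tau) = delta(m - n + tau), i.e. the entry is 1 exactly when d = t - tau.
import numpy as np

def shift_delta(n_rows, n_cols, n_lags):
    T = np.zeros((n_rows, n_cols, n_lags))
    for t in range(n_cols):
        for tau in range(n_lags):
            d = t - tau
            if 0 <= d < n_rows:
                T[d, t, tau] = 1.0
    return T

Z = shift_delta(200, 200, 8)   # sizes are illustrative only
Y = shift_delta(100, 100, 8)
```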
Figure 3 illustrates the performance of the model, using the KL cost (W = X̂^{−1}) on a 30-second piano
recording where 70% of the data is missing; we get about 5 dB SNR improvement, gracefully
degrading from 10% to 80% missing data. The results are encouraging, as quite long portions of audio
are missing; see the bottom right panel of Fig. 3.
Figure 3: [plots omitted: panels X1, X2 (Transcription Data), X3 (Isolated Recordings), X1hat (Restored), Ground Truth and Performance; axes Frequency (Hz), Notes, Time (sec), SNR (dB), Missing Data Percentage (%); legend: Reconst. SNR, Initial SNR] Top row, left to right: Observed matrices X1: spectrum of the piano performance, darker colors imply higher magnitude (missing data (70%) are shown white); X2, a piano roll obtained from a musical score of the piece; X3, spectra of 88 isolated notes from a piano. Bottom row: Reconstructed X1, the ground truth, and the SNR results with increasing missing data. Here, the initial SNR is computed by substituting 0 for the missing values.
5
Discussion
This paper establishes a link between GLMs and TFs and provides a general solution for the computation of arbitrary coupled TFs, using message passing primitives. The current treatment focused on
ML estimation; as immediate future work, the probabilistic interpretation is to be extended to full
Bayesian inference with appropriate priors and inference methods. A powerful aspect, which we
have not been able to summarize here, is assigning different cost functions, i.e. distributions, to different observation tensors in a coupled factorization model. This requires only minor modifications
to the update equations. We believe that, as a whole, the GCTF framework covers a broad range
of models that can be useful in many different application areas beyond audio processing, such as
network analysis, bioinformatics or collaborative filtering.
Acknowledgements: This work is funded by the TÜBİTAK grant number 110E292, Bayesian
matrix and tensor factorisations (BAYTEN), and the Boğaziçi University research fund BAP5723. Umut
Şimşekli is also supported by a Ph.D. scholarship from TÜBİTAK. We also would like to thank
Evrim Acar for the fruitful discussions.
References
[1] A. T. Cemgil, Bayesian inference for nonnegative matrix factorisation models, Computational Intelligence and Neuroscience 2009 (2009) 1-17.
[2] A. P. Singh, G. J. Gordon, A unified view of matrix factorization models, in: ECML PKDD'08, Part II, no. 5212, Springer, 2008, pp. 358-373.
[3] E. Acar, T. G. Kolda, D. M. Dunlavy, All-at-once optimization for coupled matrix and tensor factorizations, CoRR abs/1105.3422. arXiv:1105.3422.
[4] Q. Xu, E. W. Xiang, Q. Yang, Protein-protein interaction prediction via collective matrix factorization, in: Proc. of the IEEE International Conference on BIBM, 2010, pp. 62-67.
[5] T. G. Kolda, B. W. Bader, Tensor decompositions and applications, SIAM Review 51 (3) (2009) 455-500.
[6] Y. K. Yılmaz, A. T. Cemgil, Probabilistic latent tensor factorization, in: Proceedings of the 9th International Conference on Latent Variable Analysis and Signal Separation, LVA/ICA'10, Springer-Verlag, 2010, pp. 346-353.
[7] C. Fevotte, A. T. Cemgil, Nonnegative matrix factorisations as probabilistic inference in composite models, in: Proc. 17th EUSIPCO, 2009.
[8] Y. K. Yılmaz, A. T. Cemgil, Algorithms for probabilistic latent tensor factorization, Signal Processing (2011), doi:10.1016/j.sigpro.2011.09.033.
[9] C. E. McCulloch, S. R. Searle, Generalized, Linear, and Mixed Models, Wiley, 2001.
[10] P. McCullagh, J. A. Nelder, Generalized Linear Models, 2nd Edition, Chapman and Hall, 1989.
[11] R. Kaas, Compound Poisson distributions and GLM's, Tweedie's distribution, Tech. rep., Lecture, Royal Flemish Academy of Belgium for Science and the Arts, 2005.
[12] A. Cichocki, R. Zdunek, A. H. Phan, S. Amari, Nonnegative Matrix and Tensor Factorization, Wiley, 2009.
[13] J. R. Magnus, H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics, 3rd Edition, Wiley, 2007.
[14] M. Wainwright, M. I. Jordan, Graphical models, exponential families, and variational inference, Foundations and Trends in Machine Learning 1 (2008) 1-305.
[15] D. D. Lee, H. S. Seung, Algorithms for non-negative matrix factorization, in: NIPS, Vol. 13, 2001, pp. 556-562.
[16] M. Marcus, H. Minc, A Survey of Matrix Theory and Matrix Inequalities, Dover, 1992.
[17] R. Salakhutdinov, A. Mnih, Probabilistic matrix factorization, in: Advances in Neural Information Processing Systems, Vol. 20, 2008.
[18] G. H. Golub, C. F. V. Loan, Matrix Computations, 3rd Edition, Johns Hopkins UP, 1996.
[19] P. Smaragdis, J. C. Brown, Non-negative matrix factorization for polyphonic music transcription, in: WASPAA, 2003, pp. 177-180.
| 4480 |@word version:1 briefly:1 nd:1 tedious:1 calculus:1 decomposition:2 contraction:1 tr:2 searle:1 initial:5 configuration:6 score:4 denoting:1 ours:1 current:2 com:1 z2:21 skipping:1 yet:1 dx:2 written:1 readily:1 reminiscent:1 assigning:1 john:1 subsequent:1 partition:1 concatenate:1 shape:1 enables:1 acar:2 update:23 fund:1 polyphonic:1 generative:1 intelligence:1 ith:1 dover:1 provides:2 gx:5 popularised:1 differential:1 ik:1 shorthand:1 mask:1 indeed:3 expected:1 ica:2 pkdd:1 multi:1 bibm:1 inspired:1 salakhutdinov:1 decomposed:2 automatically:1 encouraging:1 cardinality:1 increasing:1 becomes:1 confused:1 notation:8 linearity:1 panel:2 mcculloch:2 interpreted:1 degrading:1 developed:1 unified:1 finding:1 x3j:1 temporal:1 exactly:1 dunlavy:1 grant:1 appear:2 arguably:1 orginal:1 generalised:8 dropped:1 before:1 engineering:1 understood:2 cemgil:6 flemish:1 eusipco:1 maximise:1 might:1 factorization:13 limited:1 range:3 practical:2 unique:2 practice:2 maximisation:1 lost:1 x3:5 procedure:3 area:1 composite:1 convenient:1 matching:3 protein:2 get:4 onto:1 selection:1 operator:1 context:1 applying:1 influence:1 impossible:1 fruitful:2 equivalent:3 demonstrated:2 missing:15 straightforward:1 primitive:1 starting:1 independently:2 lva:1 focused:2 formulate:1 survey:1 factorisation:30 rule:4 array:6 submodel:1 notion:1 analogous:1 kolda:2 play:2 associate:1 element:8 trend:1 updating:1 econometrics:1 observed:11 role:2 bottom:2 calculate:1 connected:1 eu:2 chord:1 mentioned:1 complexity:1 seung:1 tfs:2 singh:2 solving:2 upon:1 division:1 observables:1 basis:2 easily:4 joint:1 derivation:2 distinct:2 doi:1 gctf:1 whose:2 quite:2 valued:1 solve:1 otherwise:4 reconstruct:2 amari:1 statistic:2 g1:1 gp:1 final:2 obviously:1 net:1 reconstruction:2 interaction:2 product:3 achieve:1 academy:1 intuitive:2 getting:1 requirement:1 extending:1 leave:1 object:6 derive:2 illustrate:2 coupling:2 completion:1 develop:1 minor:1 gtf:7 eq:2 implies:2 convention:2 qd:1 closely:1 correct:1 merged:1 bader:1 vc:1 packet:1 bin:1 require:1 decompose:1 multilinear:1 extension:1 clarify:1 hall:1 ground:2 magnus:1 taylan:2 exp:1 lm:1 substituting:1 belgium:1 estimation:2 proc:2 tf:5 establishes:1 unfolding:2 clearly:1 gaussian:5 varying:1 ekli:2 broader:1 minc:1 minimisation:1 derived:4 parafac:3 notational:2 improvement:1 modelling:1 likelihood:6 indicates:1 rank:1 tech:1 sense:2 posteriori:1 inference:6 abstraction:2 istanbul:1 typically:1 a0:2 entire:2 visualisation:1 i1:6 denoted:3 development:1 art:1 special:8 marginal:1 once:2 construct:1 having:3 chapman:1 represents:1 broad:3 cancel:1 future:1 gordon:2 employ:1 dg:3 simultaneously:3 gamma:1 divergence:3 ab:1 interest:1 message:4 highly:1 mnih:1 custom:1 golub:1 bregman:1 necessary:1 xy:2 conforms:1 tweedie:6 orthogonal:1 euclidean:2 plotted:1 isolated:4 column:3 facet:1 cover:2 restoration:1 cost:7 stacking:1 entry:1 snr:6 motivating:1 synthetic:1 combined:1 density:1 international:2 siam:1 sequel:2 probabilistic:8 lee:1 hopkins:1 again:1 choose:1 possibly:2 derivative:3 li:7 factorised:3 sec:5 coefficient:1 explicitly:1 depends:1 piece:6 multiplicative:3 view:2 later:1 kaas:1 portion:1 start:1 collaborative:1 square:1 roll:2 musical:8 variance:4 qk:1 correspond:2 identify:1 yield:1 conceptually:1 modelled:1 bayesian:4 produced:1 none:1 definition:2 waspaa:1 pp:11 tucker:1 frequency:5 associated:1 proof:5 mi:4 recovers:1 treatment:2 color:1 amplitude:1 appears:1 higher:1 awkward:1 formulation:1 just:1 glms:2 sketch:2 replacing:2 logistic:1 mode:1 
believe:1 building:1 effect:1 verify:1 multiplier:1 brown:1 deliberately:1 hence:1 alternating:4 iteratively:1 semantic:1 i2:7 white:1 conditionally:2 fevotte:1 during:2 excitation:1 generalized:2 cp:9 wise:4 harmonic:1 novel:2 recently:1 variational:1 common:1 cmf:1 extend:1 interpretation:3 ims:2 expressing:1 ctf:3 vec:9 ai:2 compliment:1 rd:2 stft:1 z4:1 itak:2 multiway:2 funded:1 hxi:3 v0:19 add:1 multivariate:1 hide:1 compound:1 verlag:1 inequality:1 rep:1 scoring:3 somewhat:1 spectrogram:2 algebraically:1 maximize:1 v3:1 signal:2 ii:1 multiple:2 full:4 sound:1 turkey:1 reduces:2 d0:3 evrim:1 plug:1 long:2 a1:1 va:1 z5:1 prediction:1 regression:1 expectation:1 poisson:4 arxiv:1 iteration:2 normalization:1 represent:1 achieved:1 addition:3 want:2 completes:1 source:1 ineffective:1 file:4 hz:5 tend:1 recording:3 db:2 incorporates:1 jordan:1 call:2 extracting:1 integer:1 noting:1 yang:1 iii:1 easy:1 identified:1 idea:1 tub:2 enumerating:1 expression:1 heavier:1 handled:1 algebraic:1 passing:4 hessian:1 useful:3 procrustes:1 ph:1 reduced:3 exist:1 lsi:1 canonical:2 percentage:1 estimated:3 arising:1 dummy:1 neuroscience:1 discrete:2 write:1 dropping:2 vol:2 express:2 key:1 nevertheless:1 drawn:1 v1:1 graph:1 merely:1 run:1 inverse:4 powerful:1 named:1 arrive:1 family:9 reader:1 almost:1 frobenious:1 separation:2 prefer:1 scaling:1 vb:2 bound:1 simplification:1 played:4 smaragdis:1 nonnegative:7 kronecker:1 x2:4 simsekli:1 neudecker:1 aspect:1 department:1 structured:2 according:1 belonging:1 wi:4 encapsulates:1 modification:1 restricted:1 indexing:1 glm:11 equation:9 turn:2 allat:1 wrt:1 operation:2 v2:1 away:1 reshape:1 spectral:1 appropriate:1 tempo:1 alternative:1 existence:1 original:4 assumes:6 clustering:1 denotes:5 remaining:1 ensure:1 graphical:3 top:2 ppi:1 newton:1 music:2 giving:1 scholarship:1 boun:1 classical:1 tensor:44 g0:2 quantity:2 reconst:1 restored:1 diagonal:1 gradient:3 dp:1 link:10 thank:1 simulated:1 gracefully:1 enforcing:1 marcus:1 assuming:1 besides:1 index:16 relationship:2 z3:1 modeled:1 regulation:1 ql:1 difficult:1 negative:8 collective:2 vertical:1 observation:6 dispersion:1 datasets:1 convolution:1 ecml:1 immediate:1 extended:3 relational:1 communication:1 frame:5 arbitrary:4 nmf:7 introduced:2 perron:1 kl:5 specified:1 z1:12 required:1 established:3 nip:1 address:3 able:1 beyond:1 pattern:1 summarize:1 pioneering:1 including:1 max:3 royal:1 wainwright:1 power:1 natural:2 treated:1 restore:1 indicator:1 marginalised:1 normality:1 imply:1 negativity:1 coupled:16 cichocki:1 prior:2 review:2 piano:7 acknowledgement:1 multiplication:2 evolve:1 xiang:1 loss:1 lecture:1 permutation:3 mixed:1 limitation:1 filtering:1 var:2 vg:1 digital:1 foundation:1 share:1 row:4 compatible:2 supported:1 transpose:1 formal:1 side:2 generalise:1 mur:2 template:1 dimension:2 kenan:2 author:1 collection:1 simplified:1 lz:1 correlate:1 reconstructed:2 approximate:1 midi:2 proteinprotein:1 gene:1 umut:3 transcription:2 ml:1 generalising:1 summing:1 nelder:1 xi:25 alternatively:1 spectrum:3 latent:12 vectorization:1 iterative:1 table:1 necessarily:1 diag:2 pk:2 main:1 hierarchically:1 arrow:1 bounding:1 motivation:1 arise:1 profile:1 whole:1 edition:3 x1:7 xu:1 fig:2 darker:1 wiley:3 precision:4 exponential:7 x1i:1 third:1 theorem:3 formula:1 gazic:2 specific:1 symbol:1 zdunek:1 dl:2 corr:1 importance:1 magnitude:2 illustrates:1 phan:1 mf:15 smoothly:1 simply:3 likely:2 forming:1 expressed:3 partially:1 bo:2 scalar:3 springer:2 ch:1 truth:2 identity:6 goal:2 formulated:1 shared:1 
fisher:6 replace:1 aided:1 loan:1 generalisation:1 specifically:1 semantically:2 lemma:2 x2j:1 called:1 experimental:1 gauss:1 la:6 meaningful:3 stressed:1 bioinformatics:1 audio:8 d1:4 |
3,845 | 4,481 | Portmanteau Vocabularies for Multi-Cue Image
Representation
Fahad Shahbaz Khan1 , Joost van de Weijer1 , Andrew D. Bagdanov1,2 , Maria Vanrell1
1
Centre de Visió per Computador, Computer Science Department
Universitat Autònoma de Barcelona, Edifici O, Campus UAB (Bellaterra), Barcelona, Spain
2
Media Integration and Communication Center, University of Florence, Italy
Abstract
We describe a novel technique for feature combination in the bag-of-words model
of image classification. Our approach builds discriminative compound words from
primitive cues learned independently from training images. Our main observation
is that modeling joint-cue distributions independently is more statistically robust
for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use Information theoretic vocabulary
compression to find discriminative combinations of cues and the resulting vocabulary of portmanteau1 words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. State-of-theart results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets
demonstrate the effectiveness of our technique compared to other, significantly
more complex approaches to multi-cue image representation.
1
Introduction
Image categorization is the task of classifying an image as containing an object from a predefined
list of categories. One of the most successful approaches to this problem is the bag-of-words (BOW)
[4, 15, 11, 2]. In the bag-of-words model an image is first represented by a collection of local image
features detected either sparsely or in a regular, dense grid. Each local feature is then represented
by one or more cues, each describing one aspect of a small region around the corresponding feature.
Typical local cues include color, shape, and texture. These cues are then quantized into visual words
and the final image representation is a histogram over these visual vocabularies. In the final stage of
the BOW approach the histogram representations are sent to a classifier.
The success of BOW is highly dependent on the quality of the visual vocabulary. In this paper we
investigate visual vocabularies which are used to represent images whose local features are described
by both shape and color. To extend BOW to multiple cues, two properties are especially important:
cue binding and cue weighting. A visual vocabulary is said to have the binding property when two
independent cues appearing at the same location in an image remain coupled in the final image
representation. For example, if every local patch in an image is independently described by a shape
word and a color word, in the final image representation using compound words the binding property
ensures that shape and color words coming from the same feature location are coupled in the final
representation. The term binding is borrowed from the neuroscience field where it is used to describe
the way in which humans select and integrate the separate cues of objects in the correct combinations
in order to accurately recognize them [17]. The property of cue weighting implies that it is possible
to adapt the relevance of each cue depending on the dataset. The importance of cue weighting can
be seen from the success of Multiple Kernel Learning (MKL) techniques where weights for each
cue are automatically learned [3, 13, 21, 14, 1, 20].
1
A portmanteau is a combination of two or more words to form a neologism that communicates a concept
better than any individual word (e.g. Ski resort + Conference = Skonference). We use the term to describe our
vocabularies to emphasize the connotation of combining color and shape words into new, more meaningful
representations.
Traditionally, two approaches exist to handle multiple cues in BOW. When each cue has its own
visual vocabulary the result is known as a late fusion image representation in which an image is
represented as one histogram over shape-words and another histogram over color-words. Such a
representation does not have the cue binding property, meaning that it is impossible to know exactly
which color-shape events co-occurred at local features. Late fusion does, however, allow cue weighting. Another approach, called early fusion, constructs a single visual vocabulary of joint color-shape
words. Representations over early fusion vocabularies have the cue binding property, meaning that
the spatial co-occurrence of shape and color events is preserved. However, cue weighting in early
fusion vocabularies is very cumbersome since it must be performed before vocabulary construction,
making cross-validation very expensive. Recently, Khan et al. [10] proposed a method which combines cue binding and weighting. However, their final image representation size is equal to the number
of vocabulary words times the number of classes, and is therefore not feasible for the large data sets
considered in this paper.
A straightforward, if combinatorially inconvenient, approach to ensuring the binding property is to
create a new vocabulary that contains one word for each combination of original shape and color
feature. Considering that each of the original shape and color vocabularies may contain thousands of
words, the resulting joint vocabulary may contain millions. Such large vocabularies are impractical
as estimating joint color-shape statistics is often infeasible due to the difficulty of sampling from
limited training data. Furthermore, with so many parameters the resulting classifiers are prone to
overfitting. Because of this and other problems, this type of joint feature representation has not been
further pursued as a way of ensuring that image representations have the binding property.
In recent years a number of vocabulary compression techniques have appeared that derive small,
discriminative vocabularies from very large ones [16, 7, 5]. Most of these techniques are based on
information theoretic clustering algorithms that attempt to combine words that are equivalently discriminative for the set of object categories being considered. Because these techniques are guided by
the discriminative power of clusters of visual words, estimates of class-conditional visual word probabilities are essential. These recent developments in vocabulary compression allow us to reconsider
the direct, Cartesian product approach to building compound vocabularies.
These vocabulary compression techniques have been demonstrated on single-cue vocabularies with
a few tens of thousands of words. Starting from even moderately sized shape and color vocabularies
results in a compound shape-color vocabulary an order of magnitude larger. In such cases, robust
estimates of the underlying class-conditional joint-cue distributions may be difficult to obtain. We
show that for typical datasets a strong independence assumption about the joint color-shape distribution leads to more robust estimates of the class-conditional distributions needed for vocabulary
compression. In addition, our estimation technique allows flexible cue-specific weighting that cannot be easily performed with other cue combination techniques that maintain the binding property.
2
Portmanteau vocabularies
In this section we propose a new multi-cue vocabulary construction method that results in compact vocabularies which possess both the cue binding and the cue weighting properties described
above. Our approach is to build portmanteau vocabularies of discriminative, compound shape and
color words chosen from independently learned color and shape lexicons. The term portmanteau
is used in natural language for words which are a blend of two other words and which combine
their meaning. We use the term portmanteau to describe these compound terms to emphasize the
fact that, similarly to the use of neologistic portmanteaux in natural language to capture complex
and compound concepts, we create groups of color and shape words to describe semantic concepts
inadequately described by shape or color alone.
A simple way to ensure the binding property is by considering a product vocabulary that contains
a new word for every combination of shape and color terms. Assume that S = {s1 , s2 , ..., sM }
and C = {c1 , c2 , ..., cN } represent the visual shape and color vocabularies, respectively. Then the
product vocabulary is given by

W = {w1, w2, ..., wT} = {{si, cj} | 1 ≤ i ≤ M, 1 ≤ j ≤ N},

where T = M × N. We will also use the notation sm to identify a member from the set S.

Figure 1: [plots omitted: JS-divergence curves for Flower-102 and Bird-200, comparing Direct Empirical against Independence Assumption] Comparison of two estimates of the joint cue distribution p(S, C|R) on two large datasets.
The graphs plot the Jensen-Shannon divergence between each estimate and the true joint distribution
as a function of the number of training images used to estimate them. The true joint distribution is
estimated empirically over all images in each dataset. Estimation using the independence assumption of equation (2) yields similar or better estimates than their empirical counterparts.
A disadvantage of vocabularies of compound terms constructed by considering the Cartesian product
of all primitive shape and color words is that the total number of visual words is equal to the number
of color words times the number of shape words, which typically results in hundreds of thousands of
elements in the final vocabulary. This is impractical for two reasons. First, the high dimensionality
of the representation hampers the use of complex classifiers such as SVMs. Second, insufficient
training data often renders robust estimation of parameters very difficult and the resulting classifiers
tend to overfit the training set. Because of these drawbacks, compound product vocabularies have,
to the best of our knowledge, not been pursued in literature. In the next two subsections we discuss
our approach to overcoming these two drawbacks.
2.1
Compact Portmanteau Vocabularies
In recent years, several algorithms for feature clustering have been proposed which compress large
vocabularies into small ones [16, 7, 5]. To reduce the high-dimensionality of the product vocabulary,
we apply the Divisive Information-Theoretic feature Clustering (DITC) algorithm [5], which was shown
to outperform AIB [16]. Furthermore, DITC has also been successfully employed to construct
compact pyramid representations [6].
The DITC algorithm is designed to find a fixed number of clusters which minimize the loss in
mutual information between clusters and the class labels of training samples. In our algorithm, loss
in mutual information is measured between original product vocabulary and the resulting clusters.
The algorithm joins words which have similar discriminative power over the set of classes in the
image categorization problem. This is measured by the probability distributions p (R|wt ), where
R = {r1, r2, ..., rL} is the set of L classes.
More precisely, the drop in mutual information I between the vocabulary W and the class labels
R when going from the original set of vocabulary words W to the clustered representation W R =
{W1 , W2 , ..., WJ } (where every Wj represents a cluster of words from W ) is equal to
I(R; W) − I(R; W^R) = Σ_{j=1}^{J} Σ_{wt ∈ Wj} p(wt) KL( p(R|wt) || p(R|Wj) ),   (1)

where KL is the Kullback-Leibler divergence between two distributions. Equation (1) states that the
drop in mutual information is equal to the prior-weighted KL-divergence between a word and its
assigned cluster. The DITC algorithm minimizes this objective function by alternating computation
of the cluster distributions and assignment of compound visual words to their closest cluster. For
more details on the DITC algorithm we refer to Dhillon et al. [5]. Here we apply the DITC algorithm
to reduce the high-dimensionality of the compound vocabularies. We call the compact vocabulary
which is the output of the DITC algorithm the portmanteau vocabulary and its words accordingly
portmanteau words. The final image representation p(W^R) is a distribution over the portmanteau
words.

Figure 2: [image patch panels omitted] The effect of α on DITC clusters. Each of the large boxes contains 100 image patches
sampled from one portmanteau word on the Oxford Flower-102 dataset. Top row: five clusters
for α = 0.1. Note how these clusters are relatively homogeneous in color, while shape varies
considerably within each. Middle row: five clusters sampled for α = 0.5. The clusters show
consistency over both color and shape. Bottom row: five clusters sampled for α = 0.9. Notice how
in this case shape is instead homogeneous within each cluster.
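As a rough illustration of the alternating minimisation (ours, not the implementation used in the paper), DITC can be sketched as follows, assuming `p_R_w` holds the T × L matrix of posteriors p(R|w_t) and `p_w` the word priors p(w_t):

```python
# Hypothetical DITC sketch over the product vocabulary of T = M*N words.
import numpy as np

def ditc(p_R_w, p_w, J, n_iter=50, seed=0, eps=1e-12):
    rng = np.random.default_rng(seed)
    assign = rng.integers(J, size=len(p_w))          # random initial clusters
    L = p_R_w.shape[1]
    for _ in range(n_iter):
        # cluster distributions p(R|W_j): prior-weighted mean of members
        p_R_W = np.vstack([
            np.average(p_R_w[assign == j], axis=0, weights=p_w[assign == j])
            if np.any(assign == j) else np.full(L, 1.0 / L)
            for j in range(J)])
        # reassign each word to the cluster with the smallest KL divergence
        kl = (p_R_w[:, None, :]
              * np.log((p_R_w[:, None, :] + eps) / (p_R_W[None] + eps))).sum(-1)
        assign = kl.argmin(axis=1)
    return assign               # maps each compound word to a portmanteau word
```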
2.2
Joint distribution estimation
In solving the problem of high-dimensionality of the compound vocabularies we seemingly further complicated the estimation problem. As DITC is based on estimates of the class-conditional
distributions p(S, C|R) = p(W |R) over product vocabularies, we have increased the number of
parameters to be estimated to M × N × L. This can easily reach millions of parameters for standard
image datasets. To solve this problem we propose to estimate the class conditional distributions by
assuming independence of color and shape, given the class:
p(sm , cn |R) ? p(sm |R)p(cn |R).
(2)
Note that we do not assume independence of the cues themselves, but rather the less restrictive independence of the cues given the class. Instead of directly estimating the empirical joint distribution
p(S, C|R), we reduce the number of parameters to estimate to (M + N) × L, which in the vocabulary configurations discussed in this paper represents a reduction in complexity of two orders
of magnitude. As an additional advantage, we will show in section 2.3 that estimating the joint
distribution p(S, C|R) allows us to introduce cue weighting.
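In code, the estimate of equation (2) is a per-class outer product; a minimal sketch, under the assumption that class-conditional word counts have been accumulated on the training set:

```python
# Hypothetical estimate of eq. (2): counts_s (L x M) and counts_c (L x N) hold
# class-conditional counts of shape and color words from the training images.
import numpy as np

def joint_given_class(counts_s, counts_c, eps=1e-12):
    p_s = (counts_s + eps) / (counts_s + eps).sum(axis=1, keepdims=True)  # p(s|r)
    p_c = (counts_c + eps) / (counts_c + eps).sum(axis=1, keepdims=True)  # p(c|r)
    return np.einsum('lm,ln->lmn', p_s, p_c)  # p(s,c|r) for all M*N compound words
```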
To verify the quality of the empirical estimates of equation (2) we perform the following experiment.
In figure 1 we plot the Jensen-Shannon (JS) divergence between the empirical joint distribution obtained from the test images and the two estimates: direct estimation of the empirical joint distribution
p(S, C|R) on the training set, and an approximate estimate made by assuming independence as in
equation (2). Results are provided as a function of the number of training images for two large
datasets. A low JS-divergence means a better estimate of the true joint-cue distribution. The plotted lines show the curves for a color cue vocabulary of 100 words and a shape cue vocabulary of
5,000 words, resulting in a product vocabulary of 500,000 words. On both datasets we see that the
independence assumption actually leads to a better or equally good estimate of the joint distribution.
Increasing the number of training samples, or starting with smaller color and shape vocabularies
and hence reducing the number of parameters to estimate, will improve direct empirical estimates
of p(S, C). However, figure 1 shows that for typical vocabulary settings on large datasets the independence assumption results in equivalently good or better estimates of the joint distribution.

Figure 3: [plots omitted: two panels titled beta=1.000000 and beta=5.000000, plotting p(R|w) against classes R] The effect of β on DITC clusters. For 20 words, p(R|wt) is plotted in dotted grey lines.
DITC is used to obtain ten portmanteau words whose means p(R|Wj) are plotted in different colors. On the
left is shown the final clustering for β = 1.0. Note that none of the portmanteau means are especially discriminative for one particular class. On the right, however, for β = 5.0 each portmanteau
concentrates on discriminating one class.
2.3
Cue weighting
Constructing the compact portmanteau vocabularies based on the independence assumption significantly reduces the number of parameters to estimate. Furthermore, as we will see in this section, it
allows us to control the relative contribution of color and shape cues in the final representation.
We introduce a weighting parameter α ∈ [0, 1] in the estimate of p(C, S):

p_α(sm, cn|R) ∝ p(sm|R)^α p(cn|R)^(1−α)   (3)

where an α close to zero results in a larger influence of the color words, and an α close to one leads
to a vocabulary which focuses predominantly on shape.
To illustrate the influence of α on the vocabulary construction, we show samples from portmanteau
words obtained on the Oxford Flower-102 dataset (see figure 4) in figure 2. The DITC algorithm is
applied to reduce the product vocabulary of 500,000 compound words to 100 portmanteau words.
For settings of α ∈ {0.1, 0.5, 0.9} we show five of the hundred words. Each word is represented by
one hundred randomly sampled patches from the dataset which have been assigned to the word. The
effect of changing α can be clearly seen. For low α the portmanteau words exhibit homogeneity
of color but lack within-cluster shape consistency. On the other hand, for high α the words show
strong shape homogeneity such as low and high frequency lines and blobs, while color is more
uniformly distributed. For a setting of α = 0.5 the clustering is more consistent in both color and
shape.
Additionally, another parameter β is introduced:

p_{α,β}(sm, cn|R) ∝ ( p(sm|R)^α p(cn|R)^(1−α) )^β   (4)
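A hypothetical implementation of the weighted estimate (4), reusing the class-conditionals p_s and p_c from the sketch in section 2.2:

```python
# Hypothetical cue-weighted estimate of eq. (4); alpha weights the cues,
# beta sharpens the per-class distributions, then we renormalise per class.
import numpy as np

def weighted_joint(p_s, p_c, alpha=0.5, beta=5.0):
    pj = np.einsum('lm,ln->lmn', p_s**alpha, p_c**(1.0 - alpha)) ** beta
    return pj / pj.sum(axis=(1, 2), keepdims=True)
```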
To illustrate the influence of β, consider the following experiment on synthetic data. We generate a
set of 100 words which have random discriminative power p(R|wt) over L = 10 classes. In figure 3
we show the p(R|wt) for a subset of 20 words in grey, and p(R|Wj) ∝ Σ_{wt ∈ Wj} p(wt) p(R|wt) for
the ten portmanteau words in color. We observe that increasing the β parameter directs DITC to
find clusters which are each highly discriminative for a single class, rather than being discriminative
over all classes. We found that higher β values often lead to image representations which improve
classification results.

Figure 4: Example images from the two datasets used in our experiments. Top: images from four
categories of the Flower-102 dataset. Bottom: four example images from the Bird-200 dataset.
These weighting parameters are learned through cross-validation on the training set. In practice we
found α to change with the dataset according to the importance of color and shape. The β parameter
was found to be constant at a value of 5 for the two datasets evaluated in this paper. Both parameters
were found to significantly improve results on both datasets.
2.4
Image representation with portmanteau vocabularies
We summarize our approach to constructing portmanteau vocabularies for image representation.
We emphasize the fact that our approach is fundamentally about deriving compact multi-cue image
representations and, as such, can be used as a drop-in replacement in any bag-of-words pipeline.
Image representation by a portmanteau vocabulary built from color and shape cues follows these steps (a minimal code sketch follows the list):
1. Independent color and shape vocabularies are constructed by standard K-means clustering
over color and shape descriptors extracted from training images.
2. Empirical class-conditional word distributions p(S|R) and p(C|R) are computed from the
training set; the joint cue distribution p(S, C|R) is estimated assuming conditional independence as in equation (4).
3. The portmanteau vocabulary is computed with the DITC algorithm. The output of the
DITC is a list of indexes which, for each member of the compound vocabulary, maps to one
of the J portmanteau words.
4. Using the index list output by DITC, the original image features are revisited and the index
corresponding to the compound shape-color word at each feature is used to represent each
image as a histogram over the portmanteau vocabulary.
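A compact sketch of steps 1 and 4 (ours; the DITC index map from step 3 is assumed precomputed and passed in as `index`, and scikit-learn's KMeans is an assumed stand-in for any K-means implementation):

```python
# Hypothetical pipeline sketch: shape_feats/color_feats are per-image lists of
# local descriptors; index is a length M*N array mapping compound words to
# one of J portmanteau words (the output of DITC, step 3).
import numpy as np
from sklearn.cluster import KMeans

def portmanteau_histograms(shape_feats, color_feats, index, M, N, J):
    km_s = KMeans(n_clusters=M, n_init=4).fit(np.vstack(shape_feats))  # step 1
    km_c = KMeans(n_clusters=N, n_init=4).fit(np.vstack(color_feats))
    hists = []
    for fs, fc in zip(shape_feats, color_feats):                       # step 4
        s = km_s.predict(fs)                 # per-feature shape word id
        c = km_c.predict(fc)                 # per-feature color word id
        w = index[s * N + c]                 # compound word -> portmanteau word
        hists.append(np.bincount(w, minlength=J) / len(w))
    return np.vstack(hists)                  # p(W^R) per image
```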
3
Experimental results
We follow the standard bag-of-words approach. We use a combination of interest-point detectors
along with a dense multi-scale grid detector. The SIFT descriptor [12] is used to construct a shape
vocabulary. For color we use the color name descriptor, which is computed by converting sRGB
values to color names according to [19] after which each patch is represented as a histogram over
the eleven color names. The shape and color vocabularies are constructed using the standard K-means algorithm. In all our experiments we use a shape vocabulary of 5000 words and a color
vocabulary of 100 words. Applying Laplace weighting was not found to influence the results and
therefore not used in the experiments. The classifier is a non-linear, multi-way, one-versus-all SVM
using the χ² kernel [24]. Each test image is assigned the label of the classifier giving the highest
response and the final classification score is the mean recognition rate per category.
We performed several experiments to validate our approach to building multi-cue vocabularies by
comparing with other methods which are based on exactly the same initial SIFT and CN descriptors:
• Shape and Color only: a single vocabulary of 5000 SIFT words and one of 100 CN words.
• Early fusion: SIFT and CN are concatenated into a single descriptor. The relative weight
of shape and color is optimized by cross-validation. Note that cross-validation on cue
weighting parameters for early fusion must be done over the entire BOW pipeline, from
vocabulary construction to classification. Vocabulary size is 5000.
• Direct empirical: DITC based on the empirical distribution of p(S, C|R) over a total of
500,000 compound words estimated on the training set.
• Independence assumption: where p(S, C|R) = p(S|R)p(C|R) is assumed. We also
show separate results with and without using α and β.
In all cases the color-shape visual vocabularies are compressed to 500 visual words and spatial pyramids are constructed for the final image representation as in [11]. All of the above approaches were
evaluated on two standard and challenging datasets: Oxford Flower-102 and Caltech-UCSD Bird-200. The train-test splits are fixed for both datasets and are provided on their respective websites.2
3.1
Results on the Flower-102 and Bird-200 datasets
The Oxford Flower-102 dataset contains 8189 images of 102 different flower species. It is a challenging dataset due to significant scale and illumination changes (see figure 4). The results are
presented in table 1(a). We see that shape alone yields results superior to color. Early fusion is
reasonably good at 70.5%. This is however obtained through laborious cross validation to obtain
the optimal balance between CN and SIFT cues. Since our cue weighting is done after the initial
vocabulary and histogram construction, cross-validation is significantly faster than for early fusion.
The bottom three rows of table 1(a) give the results of our approach to image representation with
portmanteau vocabularies in a variety of configurations. The direct empirical estimation of the joint
shape-color distribution provides slightly better results than estimation based on the independence
assumption. However, weighting the two visual cues using the α parameter described in equation (3)
in the independent estimation of p(s, c|class) improves the results significantly. In particular, the
gain of almost 7% obtained by adding β is remarkable. The best recognition performance was
obtained for α = 0.8 and β = 5.
The Caltech-UCSD Bird-200 dataset contains 6033 images from 200 different bird species. This
dataset contains many bird species that closely resemble each other in terms of color and shape cues,
making the recognition task extremely difficult. Table 1(a) contains test results for our approach on
Bird-200 as well. Interestingly, on this dataset color outperforms shape alone and early fusion
yields only a small improvement over color. Results based on portmanteau vocabularies outperform
early fusion, and estimation based on the independence assumption provide better results than direct
empirical estimation. These results are further improved by the introduction of cue weighting with
a final score of 22.4% obtained with ? = 0.7 and ? = 5 outperforming all others.
3.2
Comparison with the state-of-the-art
Recently, an extensive performance evaluation of color descriptors was presented by van de Sande
et al. [18]. In this evaluation the OpponentSIFT and C-SIFT were reported to provide superior
performance on image categorization problems. We construct a visual vocabulary of 5000 visual
words for both OpponentSIFT and C-SIFT and apply the DITC algorithm to compress it to 500
visual words. As shown in table 1(b), our approach provides significantly better results compared
to both OpponentSIFT and C-SIFT, possibly due to the fact that neither supports cue weighting.
2
The Flower-102 dataset at http://www.robots.ox.ac.uk/vgg/research/flowers/ and the
Birds-200 set at http://www.vision.caltech.edu/visipedia/CUB-200.html
(a)
Method                      Flower-102   Bird-200
Shape only                  60.7         12.9
Color only                  48.5         16.8
Early Fusion                70.5         17.0
Direct empirical            64.6         18.9
Independent                 63.5         19.8
Independent + α             66.4         21.6
Independent + α + β         73.3         22.4

(b)
Method                      Flower-102   Bird-200
OpponentSIFT                69.2         14.0
C-SIFT                      65.9         13.9
MKL [13]                    72.8         -
MKL [3]                     -            19.0
Random Forest [23]          -            19.2
Saliency [9]                71.0         -
Our Approach                73.3         22.4

Table 1: Comparative evaluation of our approach. (a) Classification score on the Flower-102 and Bird-200 datasets for individual features, early fusion and several configurations of our approach. (b)
Comparison of our approach to the state-of-the-art on the Bird-200 and Flower-102 datasets.
In recent years, combining multiple cues using Multiple Kernel Learning (MKL) techniques has
received a lot of attention. These approaches combine multiple cues and multiple kernels and apply
per-class cue weighting. Table 1(b) includes two recent MKL techniques that report state-of-the-art
performance. The technique described in [3] is based on geometric blur, grayscale SIFT, color SIFT
and full image color histograms, while the approach in [13] also employs HSV, SIFT int, SIFT bd,
and HOG descriptors in the MKL framework of [21]. Despite the simplicity of our approach, which
is based on only two cues and a single kernel, it outperforms these complex multi-cue learning
techniques. Also note that both MKL approaches are based on learning class-specific weighting for
multiple cues. This is especially cumbersome when there exist several hundred object categories in
a dataset (e.g. the Bird-200 dataset contains 200 bird categories). In contrast to these approaches,
we learn global, class-independent cue weighting parameters to balance color and shape cues.
On the Flower-102 dataset, our final classification score of 73.3% is comparable to the state-of-the-art recognition performance [13, 9, 8]3 obtained on this dataset. It should be noted that Nilsback
and Zisserman [13] obtain a classification performance of 72.8% using segmented images and a
combination of four different visual cues in a multiple kernel learning framework. Our performance,
however, is obtained on unsegmented images using only color and shape cues. On the Bird-200
dataset, our approach significantly outperforms state-of-the-art methods [23, 3, 22].
4
Conclusions
In this paper we propose a new method to construct multi-cue, visual portmanteau vocabularies
that combine color and shape cues. When constructing a multi-cue vocabulary two properties are
especially desirable: cue binding and cue weighting. Starting from multi-cue product vocabularies
we compress this representation to form discriminative compound terms, or portmanteaux, used in
the final image representation. Experiments demonstrate that assuming independence of visual cues
given the categories provides a robust estimation of joint-cue distributions compared to direct empirical estimation. Assuming independence also has the advantage of both reducing the complexity
of the representation by two orders of magnitude and allowing flexible cue weighting. Our final image representation is compact, maintains the cue binding property, admits cue weighting and yields
state-of-the-art performance on the image categorization problem.
We tested our approach on two datasets, each with more than one hundred object categories. Results
demonstrate the superiority of our approach over existing ones combining color and shape cues. We
obtain a gain of 2.8% and 5.4% over the early fusion approach. Our approach also outperforms
methods based on multiple cues and MKL with per-class parameter learning. This leaves open the
possibility of using our approach to multi-cue image representation within an MKL framework.
Acknowledgments: This work is supported by the EU project ERG-TS-VICI-224737; by the Spanish Research Program Consolider-Ingenio 2010: MIPRCV (CSD200700018); by the Tuscan Regional project MNEMOSYNE (POR-FSE 2007-2013, A.IV-OB.2); and by the Spanish projects
TIN2009-14173, TIN2010-21771-C02-1. Joost van de Weijer acknowledges the support of a Ramon y Cajal fellowship.
3
From correspondence with the authors of [8] we learned that the results reported in their paper are erroneous and they do not obtain results better than [13].
References
[1] Francis Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In NIPS, 2008.
[2] A. Bosch, A. Zisserman, and X. Munoz. Scene classification via pLSA. In ECCV, 2006.
[3] Steve Branson, Catherine Wah, Florian Schroff, Boris Babenko, Peter Welinder, Pietro Perona, and Serge Belongie. Visual recognition with humans in the loop. In ECCV, 2010.
[4] G. Csurka, C. Bray, C. Dance, and L. Fan. Visual categorization with bags of keypoints. In Workshop on Statistical Learning in Computer Vision, ECCV, 2004.
[5] Inderjit Dhillon, Subramanyam Mallela, and Rahul Kumar. A divisive information-theoretic feature clustering algorithm for text classification. Journal of Machine Learning Research (JMLR), 3:1265-1287, 2003.
[6] Noha M. Elfiky, Fahad Shahbaz Khan, Joost van de Weijer, and Jordi Gonzalez. Discriminative compact pyramids for object and scene recognition. Pattern Recognition, 2011.
[7] Brian Fulkerson, Andrea Vedaldi, and Stefano Soatto. Localizing objects with smart dictionaries. In ECCV, 2008.
[8] Satoshi Ito and Susumu Kubota. Object classification using heterogeneous co-occurrence features. In ECCV, 2010.
[9] Christopher Kanan and Garrison Cottrell. Robust classification of objects, faces, and flowers using natural image statistics. In CVPR, 2010.
[10] Fahad Shahbaz Khan, Joost van de Weijer, and Maria Vanrell. Top-down color attention for object recognition. In ICCV, 2009.
[11] Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[12] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, 2004.
[13] M-E Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In ICVGIP, 2008.
[14] Alain Rakotomamonjy, Francis Bach, Stephane Canu, and Yves Grandvalet. More efficiency in multiple kernel learning. In ICML, 2007.
[15] J. Sivic, B. Russell, A. Efros, A. Zisserman, and W. Freeman. Discovering object categories in image collections. In ICCV, 2005.
[16] Noam Slonim and Naftali Tishby. Agglomerative information bottleneck. In NIPS, 1999.
[17] Anne Treisman. Feature binding, attention and object perception. Philosophical Transactions: Biological Sciences, 353(1373):1295-1306, 1998.
[18] Koen E. A. van de Sande, Theo Gevers, and Cees G. M. Snoek. Evaluating color descriptors for object and scene recognition. PAMI, 32(9):1582-1596, 2010.
[19] J. van de Weijer, C. Schmid, Jakob J. Verbeek, and D. Larlus. Learning color names for real-world applications. IEEE Transactions on Image Processing (TIP), 18(7):1512-1524, 2009.
[20] Manik Varma and Bodla Rakesh Babu. More generality in efficient multiple kernel learning. In ICML, 2009.
[21] Manik Varma and Debajyoti Ray. Learning the discriminative power-invariance trade-off. In ICCV, 2007.
[22] Jinjun Wang, Jianchao Yang, Kai Yu, Fengjun Lv, Thomas Huang, and Yihong Gong. Locality-constrained linear coding for image classification. In CVPR, 2010.
[23] Bangpeng Yao, Aditya Khosla, and Li Fei-Fei. Combining randomization and discrimination for fine-grained image categorization. In CVPR, 2011.
[24] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: A comprehensive study. IJCV, 73(2):213-218, 2007.
| 4481 |@word middle:1 compression:5 consolider:1 plsa:1 open:1 grey:2 reduction:1 initial:2 configuration:3 contains:8 score:4 interestingly:1 outperforms:4 existing:1 comparing:1 babenko:1 anne:1 si:1 must:2 bd:1 cottrell:1 blur:1 shape:54 eleven:1 plot:2 designed:1 jenson:1 drop:3 discrimination:1 alone:3 cue:78 pursued:2 website:1 leaf:1 discovering:1 accordingly:1 provides:3 quantized:1 revisited:1 location:2 lexicon:1 hsv:1 zhang:1 five:4 along:1 c2:1 direct:10 constructed:4 beta:2 ijcv:2 combine:5 ray:1 introduce:2 snoek:1 andrea:1 themselves:1 multi:12 freeman:1 automatically:1 considering:3 increasing:2 spain:1 estimating:3 campus:1 underlying:1 notation:1 medium:1 provided:2 project:3 minimizes:1 impractical:2 every:3 exactly:2 classifier:6 uk:1 control:1 superiority:1 before:1 local:7 slonim:1 despite:1 oxford:5 marszalek:1 pami:1 bird:15 challenging:2 co:3 branson:1 limited:1 statistically:1 acknowledgment:1 practice:1 empirical:15 significantly:7 vedaldi:1 matching:1 word:70 regular:1 cannot:1 close:2 impossible:1 influence:4 applying:1 www:2 koen:1 map:1 demonstrated:1 center:1 primitive:2 straightforward:1 starting:3 independently:4 attention:3 simplicity:1 deriving:1 erg:1 varma:2 fulkerson:1 handle:1 traditionally:1 laplace:1 construction:5 homogeneous:2 element:1 expensive:1 recognition:8 sparsely:1 bottom:3 wang:1 capture:1 thousand:3 region:1 ensures:1 wj:7 eu:1 russell:1 highest:1 trade:1 complexity:2 moderately:1 bird200:2 solving:1 smart:1 distinctive:1 efficiency:1 easily:2 joost:4 joint:22 represented:5 train:1 describe:5 detected:1 whose:1 jean:1 larger:2 solve:1 cvpr:4 kai:1 tested:1 compressed:1 statistic:2 final:17 seemingly:1 inadequately:1 advantage:2 blob:1 propose:3 coming:1 product:11 combining:4 loop:1 bow:6 validate:1 cluster:18 r1:1 categorization:6 comparative:1 boris:1 object:14 depending:1 andrew:1 derive:1 illustrate:2 ac:1 bosch:1 measured:2 gong:1 received:1 borrowed:1 strong:2 resemble:1 implies:1 concentrate:1 guided:1 drawback:2 correct:1 closely:1 stephane:1 human:2 clustered:1 randomization:1 brian:1 biological:1 exploring:1 around:1 considered:2 efros:1 dictionary:1 early:12 cub:1 estimation:13 schroff:1 bag:7 label:3 combinatorially:1 create:2 successfully:1 weighted:1 clearly:1 rather:2 focus:1 ponce:1 maria:2 directs:1 improvement:1 contrast:1 dependent:2 typically:1 entire:1 perona:1 going:1 classification:15 flexible:2 html:1 development:1 ingenio:1 spatial:3 autonoma:1 integration:1 mutual:4 art:5 field:1 construct:5 equal:4 weijer:4 cordelia:1 sampling:1 represents:2 yu:1 icml:2 theart:2 others:1 report:1 fundamentally:1 few:1 employ:1 randomly:1 cajal:1 recognize:1 divergence:5 individual:3 hamper:1 homogeneity:2 comprehensive:1 replacement:1 maintain:1 attempt:1 interest:1 highly:2 investigate:1 possibility:1 evaluation:3 laborious:1 predefined:1 respective:1 iv:1 plotted:3 inconvenient:1 increased:1 modeling:1 disadvantage:1 localizing:1 assignment:1 rakotomamonjy:1 subset:1 hundred:5 recognizing:1 successful:1 welinder:1 tishby:1 universitat:1 reported:2 varies:1 considerably:1 synthetic:1 discriminating:1 off:1 tip:1 treisman:1 yao:1 w1:2 jinjun:1 containing:1 huang:1 possibly:1 por:1 resort:1 li:1 de:9 coding:1 includes:1 int:1 babu:1 manik:2 performed:3 csurka:1 lot:1 lowe:1 francis:2 maintains:1 complicated:1 gevers:1 florence:1 icvgip:1 contribution:1 minimize:1 yves:1 descriptor:8 yield:4 identify:1 saliency:1 serge:1 satoshi:1 accurately:1 none:1 detector:2 reach:1 cumbersome:2 frequency:1 jordi:1 sampled:4 gain:2 
dataset:18 finegrained:1 color:65 knowledge:1 dimensionality:4 subsection:1 cj:1 improves:1 actually:1 steve:1 higher:1 follow:1 response:1 improved:1 zisserman:4 rahul:1 evaluated:2 box:1 done:2 ox:1 furthermore:3 generality:1 stage:1 overfit:1 hand:1 christopher:1 unsegmented:1 lack:1 mkl:9 quality:2 building:2 effect:3 name:4 concept:3 contain:2 true:3 counterpart:1 aib:1 verify:1 assigned:3 hence:1 alternating:1 soatto:1 leibler:1 dhillon:2 semantic:1 spanish:2 portmanteau:30 naftali:1 noted:1 theoretic:4 demonstrate:3 stefano:1 image:57 meaning:3 lazebnik:2 novel:1 recently:2 predominantly:1 superior:2 empirically:2 rl:1 million:2 extend:1 occurred:1 discussed:1 refer:1 significant:1 munoz:1 grid:2 consistency:2 similarly:1 canu:1 centre:1 language:2 robot:1 bellaterra:1 j:2 closest:1 own:1 recent:5 italy:1 compound:17 catherine:1 sande:2 outperforming:1 success:2 uab:1 caltech:4 seen:2 additional:1 florian:1 employed:1 converting:1 mallela:1 multiple:13 full:1 desirable:1 reduces:1 keypoints:1 segmented:1 faster:1 adapt:1 cross:6 bach:2 equally:1 tin2009:1 ensuring:2 verbeek:1 vision:2 nilsback:2 histogram:8 represent:3 kernel:10 pyramid:4 c1:1 preserved:1 addition:1 fellowship:1 w2:2 regional:1 posse:1 tend:1 sent:1 member:2 effectiveness:1 call:1 yang:1 split:1 automated:1 variety:1 independence:17 reduce:4 cn:11 vgg:1 yihong:1 bottleneck:1 jianchao:1 render:1 peter:1 ten:3 svms:1 category:10 generate:1 http:2 outperform:2 exist:2 notice:1 dotted:1 neuroscience:1 estimated:4 per:4 group:1 four:3 kanan:1 susumu:1 changing:1 neither:1 graph:1 pietro:1 year:3 svetlana:1 connotation:1 almost:1 c02:1 patch:4 gonzalez:1 ob:1 comparable:1 correspondence:1 fan:1 bray:1 precisely:1 fei:2 scene:4 aspect:1 extremely:1 kumar:1 attempting:1 relatively:1 kubota:1 department:1 according:2 combination:9 remain:1 smaller:1 slightly:1 larlus:1 making:2 s1:1 constrained:1 iccv:3 invariant:1 pipeline:2 equation:6 describing:1 discus:1 needed:1 know:1 apply:4 observe:1 hierarchical:1 appearing:1 occurrence:2 bangpeng:1 original:5 compress:3 top:3 clustering:7 include:1 ensure:1 thomas:1 giving:1 restrictive:1 concatenated:1 build:2 especially:4 objective:1 blend:1 said:1 exhibit:1 separate:2 vanrell:1 agglomerative:1 reason:1 assuming:5 opponentsift:4 index:3 insufficient:1 balance:2 equivalently:2 difficult:3 hog:1 noam:1 reconsider:1 ski:1 perform:1 allowing:1 observation:1 datasets:16 sm:8 t:1 communication:1 ucsd:3 jakob:1 overcoming:1 introduced:1 kl:3 khan:3 optimized:1 extensive:1 wah:1 philosophical:1 sivic:1 learned:5 barcelona:2 nip:2 vici:1 beyond:1 flower:18 pattern:1 perception:1 appeared:1 summarize:1 program:1 built:1 ramon:1 power:4 event:2 difficulty:1 natural:4 improve:3 acknowledges:1 coupled:2 schmid:3 text:1 prior:1 literature:1 geometric:1 relative:2 loss:2 tin2010:1 versus:1 remarkable:1 lv:1 validation:6 integrate:1 consistent:1 grandvalet:1 classifying:1 row:4 prone:1 eccv:5 supported:1 infeasible:1 alain:1 theo:1 allow:2 face:1 van:7 distributed:1 curve:1 vocabulary:81 evaluating:1 world:1 author:1 collection:2 made:1 transaction:2 debajyoti:1 approximate:1 compact:9 emphasize:3 kullback:1 global:1 overfitting:1 assumed:1 belongie:1 discriminative:14 grayscale:1 khosla:1 table:6 additionally:1 learn:1 reasonably:1 robust:6 fse:1 forest:1 complex:4 constructing:3 main:1 dense:2 s2:1 join:1 garrison:1 communicates:1 weighting:25 late:2 jmlr:1 ito:1 down:1 erroneous:1 specific:2 sift:13 jensen:1 list:3 r2:1 svm:1 admits:1 fusion:14 essential:1 workshop:1 adding:1 importance:2 
texture:2 magnitude:3 illumination:1 cartesian:2 locality:1 visual:23 aditya:1 inderjit:1 binding:16 extracted:1 conditional:7 sized:1 visio:1 kmeans:1 feasible:1 change:2 typical:4 reducing:2 uniformly:1 wt:11 called:1 total:2 specie:3 invariance:1 divisive:2 experimental:1 shannon:2 meaningful:1 rakesh:1 select:1 support:3 relevance:1 dance:1 |
3,846 | 4,482 | Learning Auto-regressive Models from Sequence and
Non-sequence Data
Jeff Schneider
Robotics Institute
Carnegie Mellon University
[email protected]
Tzu-Kuo Huang
Machine Learning Department
Carnegie Mellon University
[email protected]
Abstract
Vector Auto-regressive models (VAR) are useful tools for analyzing time series
data. In quite a few modern time series modelling tasks, the collection of reliable
time series turns out to be a major challenge, either due to the slow progression of
the dynamic process of interest, or inaccessibility of repetitive measurements of
the same dynamic process over time. In those situations, however, we observe that
it is often easier to collect a large amount of non-sequence samples, or snapshots
of the dynamic process of interest. In this work, we assume a small amount of time
series data are available, and propose methods to incorporate non-sequence data
into penalized least-square estimation of VAR models. We consider non-sequence
data as samples drawn from the stationary distribution of the underlying VAR
model, and devise a novel penalization scheme based on the Lyapunov equation
concerning the covariance of the stationary distribution. Experiments on synthetic
and video data demonstrate the effectiveness of the proposed methods.
1
Introduction
Vector Auto-regressive models (VAR) are an important class of models for analyzing multivariate
time series data. They have proven to be very useful in capturing and forecasting the dynamic
properties of time series in a number of domains, such as finance and economics [18, 13]. Recently,
researchers in computational biology applied VAR models in the analysis of genomic time series
[12], and found interesting results that were unknown previously.
In quite a few scientific modeling tasks, a major difficulty turns out to be the collection of reliable
time series data. In some situations, the dynamic process of interest may evolve slowly over time,
such as the progression of Alzheimer's or Parkinson's diseases, and researchers may need to spend
months or even years tracking the dynamic process to obtain enough time series data for analysis.
In other situations, the dynamic process of interest may not be able to undergo repetitive measurements, so researchers have to measure multiple instances of the same process while maintaining
synchronization among these instances. One such example is gene expression time series. In their
study, [19] measured expression profiles of yeast genes along consecutive metabolic cycles. Due to
the destructive nature of the measurement technique, they collected expression data from multiple
yeast cells. In order to obtain reliable time series data, they spent a lot of effort developing a stable
environment to synchronize the cells during the metabolic cycles. Yet, they point out in their discussion that such a synchronization scheme may not work for other species, e.g., certain bacteria and
fungi, as effectively as for yeast.
While obtaining reliable time series can be difficult, we observe that it is often easier to collect non-sequence samples, or snapshots of the dynamic process of interest¹.
¹ In several disciplines, such as social and medical sciences, the former is usually referred to as a longitudinal study, while the latter is similar to what is called a cross-sectional study.
For example, a scientist studying Alzheimer's or Parkinson's can collect samples from his or her current pool of patients, each of whom may be in a different stage of the disease. Or in gene expression analysis, current technology
already enables large-scale collection of static gene expression data. Previously [6] investigated
ways to extract dynamics from such static gene expression data, and more recently [8, 9] proposed
methods for learning first-order dynamic models from general non-sequence data. However, most
of these efforts suffer from a fundamental limitation: due to lack of temporal information, multiple
dynamic models may fit the data equally well and hence certain characteristics of dynamics, such as
the step size of a discrete-time model and the overall temporal direction, become non-identifiable.
In this work, we aim to combine these two types of data to improve learning of dynamic models. We
assume that a small amount of sequence samples and a large amount of non-sequence samples are
available. Our aim is to rely on the few sequence samples to obtain a rough estimate of the model,
while refining this rough estimate using the non-sequence samples. We consider the following first-order $p$-dimensional vector auto-regressive model:

$$x^{t+1} = x^t A + \epsilon^{t+1}, \quad (1)$$

where $x^t \in \mathbb{R}^{1 \times p}$ is the state vector at time $t$, $A \in \mathbb{R}^{p \times p}$ is the transition matrix, and $\epsilon^t$ is a white-noise process with a time-invariant variance $\sigma^2 I$. Given a sequence sample, a common estimation
method for A is the least-square estimator, whose properties have been studied extensively (see e.g.,
[7]). We assume that the process (1) is stable, i.e., the eigenvalues of A have modulus less than one.
As a result, the process (1) has a stationary distribution, whose covariance Q is determined by the
following discrete-time Lyapunov equation:
A? QA + ? 2 I = Q.
(2)
Linear quadratic Lyapunov theory (see e.g., [1]) gives that Q is uniquely determined if and only if
?i (A)?j (A) 6= 1 for 1 ? i, j ? p, where ?i (A) is the i-th eigenvalue of A. If the noise process
?t follows a normal distribution, the stationary distribution also follows a normal distribution, with
covariance Q determined as above. Since our goal is to estimate A, a more relevant perspective is
viewing (2) as a system of constraints on A. What motivates this work is that the estimation of Q
requires only samples drawn from the stationary distribution rather than sequence data. However,
even if we have the true $Q$ and $\sigma^2$, we still cannot uniquely determine $A$ because (2) is an underdetermined system² of $A$. We thus rely on the few sequence samples to resolve the ambiguity.
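To make the role of equation (2) concrete, the following minimal sketch computes the stationary covariance of a stable VAR(1) model numerically. It assumes SciPy is available; the helper name is ours, not from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def stationary_covariance(A, sigma2=1.0):
    # Solve A^T Q A + sigma^2 I = Q. SciPy's solver handles
    # M X M^T - X + P = 0, so we pass M = A^T and P = sigma^2 I.
    p = A.shape[0]
    return solve_discrete_lyapunov(A.T, sigma2 * np.eye(p))

# The 2x2 transition matrix that appears later in Eq. (5).
A = np.array([[-0.4280, 0.5723],
              [-1.0428, -0.7144]])
Q = stationary_covariance(A)
assert np.allclose(A.T @ Q @ A + np.eye(2), Q)  # Lyapunov residual vanishes
```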
We describe the proposed methods in Section 2, and demonstrate their performance through experiments on synthetic and video data in Section 3. Our finding in short is that when the amount of
sequence data is small and our VAR model assumption is valid, the proposed methods of incorporating non-sequence data into estimation significantly improve over standard methods, which use only
the sequence data. We conclude this work and discuss future directions in Section 4.
2 Proposed Methods
Let $\{x^t\}_{t=1}^T$ be a sequence of observations generated by the process (1). The standard least-square estimator for the transition matrix $A$ is the solution to the following minimization problem:

$$\min_A \|Y - XA\|_F^2, \quad (3)$$

where $Y^\top := [(x^2)^\top (x^3)^\top \cdots (x^T)^\top]$, $X^\top := [(x^1)^\top (x^2)^\top \cdots (x^{T-1})^\top]$, and $\|\cdot\|_F$ denotes the matrix Frobenius norm. When $p > T$, which is often the case in modern time series modeling tasks, the least square problem (3) has multiple solutions all achieving zero squared error, and the resulting estimator overfits the data. A common remedy is adding a penalty term on $A$ to (3) and minimizing the resulting regularized sum of squared errors. Usual penalty terms include the ridge penalty $\|A\|_F^2$ and the sparse penalty $\|A\|_1 := \sum_{i,j} |A_{ij}|$.
Now suppose we also have a set of non-sequence observations $\{z_i\}_{i=1}^n$ drawn independently from
the stationary distribution of (1). Note that we use superscripts for time indices and subscripts for
data indices. As described in Section 1, the size n of the non-sequence sample can usually be much
larger than the size T of the sequence data. To incorporate the non-sequence observations into the
² If we further require $A$ to be symmetric, (2) would be a simplified Continuous-time Algebraic Riccati Equation, which has a unique solution under some conditions (c.f. [1]).
[Figure 1: Level sets of different functions in a bivariate AR example. (a) SSE and Ridge; (b) Lyap; (c) SSE+Ridge+(1/2)Lyap.]

estimation procedure, we first obtain a covariance estimate $\hat{Q}$ of the stationary distribution from the non-sequence sample, and then turn the Lyapunov equation (2) into a regularization term on $A$. More precisely, in addition to the usual ridge or sparse penalty terms, we also consider the following regularization:

$$\|A^\top \hat{Q} A + \sigma^2 I - \hat{Q}\|_F^2, \quad (4)$$
which we refer to as the Lyapunov penalty. To compare (4) with the ridge penalty and the sparse
penalty, we consider (3) as a multiple-response regression problem and view the i-th column of A as
the regression coefficient vector for the i-th output dimension. From this viewpoint, we immediately
see that both the ridge and the sparse penalizations treat the p regression problems as unrelated. On
the contrary, the Lyapunov penalty incorporates relations between pairs of columns of A by using a
covariance estimate $\hat{Q}$. In other words, although the non-sequence sample does not provide direct
information about the individual regression problems, it does reveal how the regression problems
are related to one another. To illustrate how the Lyapunov penalty may help to improve learning, we
give an example in Figure 1. The true transition matrix is
$$A = \begin{bmatrix} -0.4280 & 0.5723 \\ -1.0428 & -0.7144 \end{bmatrix} \quad (5)$$

and $\epsilon^t \sim N(0, I)$. We generate a sequence of 4 points, draw a non-sequence sample of 20 points independently from the stationary distribution and obtain the sample covariance $\hat{Q}$. We fix the
second column of A but vary the first, and plot in Figure 1(a) the resulting level sets of the sum of
squared errors on the sequence (SSE) and the ridge penalty (Ridge), and in Figure 1(b) the level
sets of the Lyapunov penalty (Lyap). We also give coordinates of the true [A11 A21 ]? , the minima
of SSE, Ridge, and Lyap, respectively. To see the behavior of the ridge regression, we trace out
a path of the ridge regression solution by varying the penalization parameter, as indicated by the
red-to-black curve in Figure 1(a). This path is pretty far from the true model, due to insufficient
sequence data. For the Lyapunov penalty, we observe that it has two local minima, one of which is
very close to the true model, while the other, also the global minimum, is very far. Thus, neither
ridge regression nor the Lyapunov penalty can be used on its own to estimate the true model well.
But as shown in Figure 1(c), the combined objective, SSE+Ridge+(1/2)Lyap, has its global minimum
very close to the true model. This demonstrates how the ridge regression and the Lyapunov penalty
may complement each other: the former by itself gives an inaccurate estimation of the true model,
but is just enough to identify a good model from the many candidate local minima provided by the
latter.
In the following we describe our proposed methods for incorporating the Lyapunov penalty (4) into
ridge and sparse least-square estimation. We also discuss robust estimation for the covariance Q.
2.1 Ridge and Lyapunov penalty
Here we estimate A by solving the following problem:
$$\min_A \frac{1}{2}\|Y - XA\|_F^2 + \frac{\lambda_1}{2}\|A\|_F^2 + \frac{\lambda_2}{4}\|A^\top \hat{Q} A + \sigma^2 I - \hat{Q}\|_F^2, \quad (6)$$

where $\hat{Q}$ is a covariance estimate obtained from the non-sequence sample. We treat $\lambda_1$, $\lambda_2$ and $\sigma^2$
as hyperparameters and determine their values on a validation set. Given these hyperparameters, we
solve (6) by gradient descent with back-tracking line search for the step size. The gradient of the
objective function is given by
$$-X^\top Y + X^\top X A + \lambda_1 A + \lambda_2 \hat{Q} A (A^\top \hat{Q} A + \sigma^2 I - \hat{Q}). \quad (7)$$
As mentioned before, (6) is a non-convex problem and thus requires good initialization. We use the
following two initial estimates of A:
$$\hat{A}_{\mathrm{lsq}} := (X^\top X)^{\dagger} X^\top Y \quad \text{and} \quad \hat{A}_{\mathrm{ridge}} := (X^\top X + \lambda_1 I)^{-1} X^\top Y, \quad (8)$$

where $(\cdot)^{\dagger}$ denotes the Moore-Penrose pseudo inverse of a matrix, making $\hat{A}_{\mathrm{lsq}}$ the minimum-norm solution to the least square problem (3). We run the gradient descent algorithm with these two initial
estimates, and choose the estimated A that gives a smaller objective.
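A rough sketch of this fitting loop, under our reading of (6)-(8), is given below. It assumes NumPy, uses only the ridge initialization for brevity, and omits the hyperparameter validation loop; all names are ours, not the authors' code.

```python
import numpy as np

def lyap_ridge_fit(X, Y, Q_hat, lam1, lam2, sigma2,
                   n_iter=500, step0=1.0, shrink=0.5):
    """Gradient descent for objective (6) with backtracking line search."""
    p = X.shape[1]
    A = np.linalg.solve(X.T @ X + lam1 * np.eye(p), X.T @ Y)  # ridge init, Eq. (8)

    def obj(A):
        M = A.T @ Q_hat @ A + sigma2 * np.eye(p) - Q_hat
        return (0.5 * np.linalg.norm(Y - X @ A, 'fro')**2
                + 0.5 * lam1 * np.linalg.norm(A, 'fro')**2
                + 0.25 * lam2 * np.linalg.norm(M, 'fro')**2)

    for _ in range(n_iter):
        M = A.T @ Q_hat @ A + sigma2 * np.eye(p) - Q_hat
        G = -X.T @ Y + X.T @ X @ A + lam1 * A + lam2 * Q_hat @ A @ M  # Eq. (7)
        step, f0 = step0, obj(A)
        while obj(A - step * G) > f0 and step > 1e-12:  # backtracking
            step *= shrink
        A = A - step * G
    return A
```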
2.2 Sparse and Lyapunov penalty
Sparse learning for vector auto-regressive models has become a useful tool in many modern time
series modeling tasks, where the number p of states in the system is usually larger than the length
T of the time series. For example, an important problem in computational biology is to understand
the progression of certain biological processes from some measurements, such as temporal gene
expression data.
Using an idea similar to (6), we estimate A by
$$\min_A \frac{1}{2}\|Y - XA\|_F^2 + \frac{\lambda_2}{4}\|A^\top \hat{Q} A + \sigma^2 I - \hat{Q}\|_F^2, \quad (9)$$
$$\text{s.t.} \quad \|A\|_1 \le \lambda_1.$$
Instead of adding a sparse penalty on $A$ to the objective function, we impose a constraint on the $\ell_1$ norm of $A$. Both the penalty and the constraint formulations have been considered in the sparse learning literature, and shown to be equivalent in the case of a convex objective. Here we choose the constraint formulation because it can be solved by a simple projected gradient descent method. On the contrary, the penalty formulation leads to a non-smooth and non-convex optimization problem, which is difficult to solve with standard methods for sparse learning. In particular, the soft-thresholding-based coordinate descent method for LASSO does not apply due to the Lyapunov
regularization term. Moreover, most of the common methods for non-smooth optimization, such
as bundle methods, solve convex problems and need non-trivial modification in order to handle
non-convex problems [14].
Let J(A) denote the objective function in (9) and A(k) denote the intermediate solution at the k-th
iteration. Our projected gradient method updates A(k) to A(k+1) by the following rule:
$$A^{(k+1)} \leftarrow \Pi(A^{(k)} - \sigma^{(k)} \nabla J(A^{(k)})), \quad (10)$$

where $\sigma^{(k)} > 0$ denotes a proper step size, $\nabla J(A^{(k)})$ denotes the gradient of $J(\cdot)$ at $A^{(k)}$, and $\Pi(\cdot)$ denotes the projection onto the feasible region $\|A\|_1 \le \lambda_1$. More precisely, for any $p$-by-$p$ real matrix $V$ we define

$$\Pi(V) := \arg\min_{\|A\|_1 \le \lambda_1} \|A - V\|_F^2. \quad (11)$$

To compute the projection, we use the efficient $\ell_1$ projection technique given in Figure 2 of [5], whose expected running time is linear in the size of $V$.
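For reference, here is one way the projection (11) could be implemented. The paper cites the expected-linear-time algorithm of [5]; this sketch uses the simpler O(d log d) sort-based variant from the same paper, and the function names are ours.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of vector v onto {x : ||x||_1 <= radius}."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def project_matrix(V, lam1):
    """Pi(V) of Eq. (11): treat the p x p matrix as one long vector."""
    return project_l1_ball(V.ravel(), lam1).reshape(V.shape)
```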
For choosing a proper step size $\sigma^{(k)}$, we consider the simple and effective Armijo rule along the projection arc described in [2]. This procedure is given in Algorithm 1, and the main idea is to ensure a sufficient decrease in the objective value per iteration (13). [2] proved that there always exists $\sigma^{(k)} = \beta^{r_k} > 0$ satisfying (13), and every limit point of $\{A^{(k)}\}_{k=0}^{\infty}$ is a stationary point of (9). In our experiments we set $c = 0.01$ and $\beta = 0.1$, both of which are typical values used in
gradient descent. As in the previous section, we need good initializations for the projected gradient
descent method. Here we use these two initial estimates:
$$\hat{A}_{\mathrm{lsq}\Pi} := \arg\min_{\|A\|_1 \le \lambda_1} \|A - \hat{A}_{\mathrm{lsq}}\|_F^2 \quad \text{and} \quad \hat{A}_{\mathrm{sp}} := \arg\min_{\|A\|_1 \le \lambda_1} \frac{1}{2}\|Y - XA\|_F^2, \quad (12)$$

where $\hat{A}_{\mathrm{lsq}}$ is defined in (8), and then choose the one that leads to a smaller objective value.
Algorithm 1: Armijo's rule along the projection arc
Input: $A^{(k)}$, $\nabla J(A^{(k)})$, $0 < \beta < 1$, $0 < c < 1$.
Output: $A^{(k+1)}$
1. Find $\sigma^{(k)} = \max\{\beta^{r_k} \mid r_k \in \{0, 1, \ldots\}\}$ such that $A^{(k+1)} := \Pi(A^{(k)} - \sigma^{(k)} \nabla J(A^{(k)}))$ satisfies
$$J(A^{(k+1)}) - J(A^{(k)}) \le c \, \mathrm{trace}\big(\nabla J(A^{(k)})^\top (A^{(k+1)} - A^{(k)})\big). \quad (13)$$

2.3 Robust estimation of covariance matrices
To obtain a good estimator for A using the proposed methods, we need a good estimator for the
covariance of the stationary distribution of (1). Given an independent sample $\{z_i\}_{i=1}^n$ drawn from the stationary distribution, the sample covariance is defined as

$$S := \frac{1}{n-1} \sum_{i=1}^n (z_i - \bar{z})^\top (z_i - \bar{z}), \quad \text{where } \bar{z} := \frac{\sum_{i=1}^n z_i}{n}. \quad (14)$$
Although unbiased, the sample covariance is known to be vulnerable to outliers, and ill-conditioned
when the number of sample points n is smaller than the dimension p. Both issues arise in many
real world problems, and the latter is particularly common in gene expression analysis. Therefore,
researchers in many fields, such as statistics [17, 20, 11], finance [10], signal processing [3, 4], and
recently computational biology [15], have investigated robust estimators of covariances. Most of
these results originate from the idea of shrinkage estimators, which shrink the covariance matrix
towards some target covariance with a simple structure, such as a diagonal matrix. It has been
shown in, e.g., [17, 10] that shrinking the sample covariance can achieve a smaller mean-squared
error (MSE). More specifically, [10] considers the following linear shrinkage:
$$\hat{Q} = (1 - \lambda) S + \lambda F \quad (15)$$
for $0 < \lambda < 1$ and some target covariance $F$, and derives a formula for the optimal $\lambda$ that minimizes the mean-squared error:

$$\lambda^* := \arg\min_{0 \le \lambda \le 1} E(\|\hat{Q} - Q\|_F^2), \quad (16)$$

which involves unknown quantities such as true covariances of $S$. [15] proposed to estimate $\lambda^*$ by replacing all the population quantities appearing in $\lambda^*$ by their unbiased empirical estimates, and derived the resulting estimator $\hat{\lambda}^*$ for several types of target $F$. For the experiments in this paper we
use the estimator proposed in [15] with the following F :
$$F_{ij} = \begin{cases} S_{ij}, & \text{if } i = j, \\ 0, & \text{otherwise}, \end{cases} \quad 1 \le i, j \le p. \quad (17)$$
Denoting the sample correlation matrix as $R$, we give the final estimator $\hat{Q}$ (Table 1 in [15]) below:

$$\hat{R}_{ij} := \begin{cases} 1, & \text{if } i = j, \\ R_{ij} \min(1, \max(0, 1 - \hat{\lambda}^*)), & \text{otherwise}, \end{cases} \qquad \hat{Q}_{ij} := \begin{cases} S_{ij}, & \text{if } i = j, \\ \hat{R}_{ij} \sqrt{S_{ii} S_{jj}}, & \text{otherwise}, \end{cases} \quad (18)$$

$$\hat{\lambda}^* := \frac{\sum_{i \neq j} \widehat{\mathrm{Var}}(R_{ij})}{\sum_{i \neq j} R_{ij}^2} = \frac{\sum_{i \neq j} \frac{n}{(n-1)^3} \sum_{k=1}^n (w_{kij} - \bar{w}_{ij})^2}{\sum_{i \neq j} R_{ij}^2}, \quad (19)$$

where

$$w_{kij} := (\tilde{z}_k)_i (\tilde{z}_k)_j, \qquad \bar{w}_{ij} := \frac{\sum_{k=1}^n w_{kij}}{n}, \quad (20)$$

and $\{\tilde{z}_i\}_{i=1}^n$ are standardized non-sequence samples.
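The sketch below transcribes (14) and (18)-(20) into NumPy under our reading of the formulas. It is illustrative rather than a drop-in replacement for the reference implementation of [15]; all names are ours.

```python
import numpy as np

def shrinkage_covariance(Z):
    """Shrinkage estimator of Eqs. (18)-(20); Z is n x p (rows = samples)."""
    n, p = Z.shape
    S = np.cov(Z, rowvar=False)                        # sample covariance, Eq. (14)
    sd = np.sqrt(np.diag(S))
    R = S / np.outer(sd, sd)                           # sample correlation
    Zs = (Z - Z.mean(axis=0)) / Z.std(axis=0, ddof=1)  # standardized samples

    # w_kij = (z~_k)_i (z~_k)_j, Eq. (20), stacked over k.
    W = np.einsum('ki,kj->kij', Zs, Zs)
    var_R = n / (n - 1.0)**3 * ((W - W.mean(axis=0))**2).sum(axis=0)

    off = ~np.eye(p, dtype=bool)
    lam = var_R[off].sum() / (R[off]**2).sum()         # Eq. (19)
    lam = min(1.0, max(0.0, lam))                      # clip as in Eq. (18)

    R_hat = R * (1.0 - lam)
    np.fill_diagonal(R_hat, 1.0)                       # Eq. (18), correlations
    return R_hat * np.outer(sd, sd)                    # Eq. (18), covariances
```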
[Figure 2: Testing performances and eigenvalues in modulus for the dense model. Panels (a)-(c) show the three performance measures; panel (d) shows eigenvalues in modulus.]
3 Experiments
To evaluate the proposed methods, we conduct experiments on synthetic and video data. In both sets
of experiments we use the following two performance measures for a learnt model $\hat{A}$:

Normalized error: $\frac{1}{T-1} \sum_{t=1}^{T-1} \frac{\|x^{t+1} - x^t \hat{A}\|^2}{\|x^{t+1} - x^t\|^2}$.

Cosine score: $\frac{1}{T-1} \sum_{t=1}^{T-1} \frac{(x^{t+1} - x^t)^\top (x^t \hat{A} - x^t)}{\|x^{t+1} - x^t\| \, \|x^t \hat{A} - x^t\|}$.

To give an idea of how a good estimate $\hat{A}$ would perform under these two measures, we point out that a constant prediction $\hat{x}^{t+1} = x^t$ leads to a normalized error of 1, and a random-walk prediction $\hat{x}^{t+1} = x^t + \epsilon^{t+1}$, $\epsilon^{t+1}$ being a white-noise process, results in a nearly-zero cosine score. Thus, when the true model is more than a simple random walk, a good estimate $\hat{A}$ should achieve a normalized error much smaller than 1 and a cosine score way above 0. We also note that the cosine score is upper-bounded by 1. In experiments on synthetic data we have the true transition matrix $A$, so we consider a third criterion, the matrix error: $\|\hat{A} - A\|_F / \|A\|_F$.
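For concreteness, the three criteria can be computed as in the following sketch, where `X_seq` is a $T \times p$ array whose rows are $x^1, \ldots, x^T$; function and variable names are ours.

```python
import numpy as np

def evaluate(A_hat, X_seq, A_true=None):
    """Normalized error, cosine score, and (optionally) matrix error."""
    diff_obs = X_seq[1:] - X_seq[:-1]             # x^{t+1} - x^t
    diff_pred = X_seq[:-1] @ A_hat - X_seq[:-1]   # x^t A_hat - x^t
    norm_err = np.mean(
        np.sum((X_seq[1:] - X_seq[:-1] @ A_hat)**2, axis=1)
        / np.sum(diff_obs**2, axis=1))
    cos = np.mean(
        np.sum(diff_obs * diff_pred, axis=1)
        / (np.linalg.norm(diff_obs, axis=1) * np.linalg.norm(diff_pred, axis=1)))
    mat_err = (np.linalg.norm(A_hat - A_true, 'fro')
               / np.linalg.norm(A_true, 'fro')) if A_true is not None else None
    return norm_err, cos, mat_err
```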
In all our experiments, we have a training sequence, a testing sequence, and a non-sequence sample.
To choose the hyper-parameters $\lambda_1$, $\lambda_2$ and $\sigma^2$, we split the training sequence into two halves and use the second half as the validation sequence. Once we find the best hyper-parameters according to the validation performance, we train a model on the full training sequence and predict on the testing sequence. For $\lambda_1$ and $\lambda_2$, we adopt the usual grid-search scheme with a suitable range of values. For $\sigma^2$, we observe that (2) implies $\hat{Q} - \sigma^2 I$ should be positive semidefinite, and thus search the set $\{0.9^j \min_i \lambda_i(\hat{Q}) \mid 1 \le j \le 3\}$. In most of our experiments, we find that the proposed methods are much less sensitive to $\sigma^2$ than to $\lambda_1$ and $\lambda_2$.
3.1 Synthetic Data
We consider the following two VAR models with a Gaussian white noise process $\epsilon^t \sim N(0, I)$.

Dense Model: $A = \frac{0.95\, M}{\max_i(|\lambda_i(M)|)}$, $M_{ij} \sim N(0, 1)$, $1 \le i, j \le 200$.

Sparse Model: $A = \frac{0.95\, (M \circ B)}{\max_i(|\lambda_i(M \circ B)|)}$, $M_{ij} \sim N(0, 1)$, $B_{ij} \sim \mathrm{Bern}(1/8)$, $1 \le i, j \le 200$,

where $\mathrm{Bern}(h)$ is the Bernoulli distribution with success probability $h$, and $\circ$ denotes the entrywise
product of two matrices. By setting h = 1/8, we make the sparse transition matrix A have roughly
40000/8 = 5000 non-zero entries. Both models are stable, and the stationary distribution for each
model is a zero-mean Gaussian. We obtain the covariance Q of each stationary distribution by
solving the Lyapunov equation (2). For a single experiment, we generate a training sequence and a
testing sequence, both initialized from the stationary distribution, and draw a non-sequence sample
independently from the stationary distribution. We set the length of the testing sequence to be
[Figure 3: Testing performances and eigenvalues in modulus for the sparse model. Panels (a)-(c) show the three performance measures; panel (d) shows eigenvalues in modulus.]

800, and vary the training sequence length $T$ and the non-sequence sample size $n$: for the dense model, $T \in \{50, 100, 150, 200, 300, 400, 600, 800\}$ and $n \in \{50, 400, 1600\}$; for the sparse model, $T \in \{25, 75, 150, 400\}$ and $n \in \{50, 400, 1600\}$. Under each combination of $T$ and $n$, we compare
the proposed Lyapunov penalization method with the baseline approach of penalized least square,
which uses only the sequence data. To investigate the limit of the proposed methods, we also use the
true Q for the Lyapunov penalization. We run 10 such experiments for the dense model and 5 for the
sparse model, and report the overall performances of both the proposed and the baseline methods.
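A sketch of how such synthetic data could be generated (assuming NumPy/SciPy; the random seed and helper names are ours):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
p = 200

def make_dense_A(p):
    """Dense model: rescale a Gaussian matrix to spectral radius 0.95."""
    M = rng.standard_normal((p, p))
    return 0.95 * M / np.max(np.abs(np.linalg.eigvals(M)))

def make_sparse_A(p, h=1/8):
    """Sparse model: entrywise product M o B with Bernoulli(1/8) support."""
    M = rng.standard_normal((p, p)) * (rng.random((p, p)) < h)
    return 0.95 * M / np.max(np.abs(np.linalg.eigvals(M)))

A = make_dense_A(p)
Q = solve_discrete_lyapunov(A.T, np.eye(p))   # stationary covariance, Eq. (2)
# Non-sequence sample: i.i.d. draws from the stationary Gaussian N(0, Q).
Z = rng.multivariate_normal(np.zeros(p), Q, size=1600)
# Sequence sample: start from the stationary distribution and iterate (1).
seq = [rng.multivariate_normal(np.zeros(p), Q)]
for _ in range(49):
    seq.append(seq[-1] @ A + rng.standard_normal(p))
X_seq = np.vstack(seq)
```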
3.1.1 Experimental results for the dense model
We give boxplots of the three performance measures in the 10 experiments in Figures 2(a) to 2(c).
The ridge regression approach and the proposed Lyapunov penalization method (6) are abbreviated
as Ridge and Lyap, respectively. For normalized error and cosine score, we also report the performance of the true A on testing sequences.
We observe that Lyap improves over Ridge more significantly when the training sequence length
$T$ is small ($\le 200$) and the non-sequence sample size $n$ is large ($\ge 400$). When $T$ is large, Ridge
already performs quite well and Lyap does not improve the performance much. But with the true
stationary covariance Q, Lyap outperforms Ridge significantly for all T . When n is small, the
covariance estimate $\hat{Q}$ is far from the true $Q$ and the Lyapunov penalty does not provide useful information about $A$. In this case, the value of $\lambda_2$ determined by the validation performance is usually quite small (0.5 or 1) compared to $\lambda_1$ (256), so the two methods perform similarly on testing
sequences. We note that if instead of the robust covariance estimate in (18) and (19) we use the
sample covariance, the performance of Lyap can be marginally worse than Ridge when n is small.
A precise statement on how the estimation error in $\hat{Q}$ affects $\hat{A}$ is worth studying in the future. As a qualitative assessment of the estimated transition matrices, in Figure 2(d) we plot the eigenvalues in modulus of the true $A$ and the $\hat{A}$'s obtained by different methods when $T = 50$ and $n = 1600$. The
eigenvalues are sorted according to their modulus. Both Ridge and Lyap severely under-estimate the
eigenvalues in modulus, but Lyap preserves the spectrum much better than Ridge.
3.1.2 Experimental results for the sparse model
We give boxplots of the performance measures in the 5 experiments in Figures 3(a) to 3(c), and the
eigenvalues in modulus of the true $A$ and some $\hat{A}$'s in Figure 3(d). The sparse least-square method
and the proposed method (9) are abbreviated as Sparse and Lyap, respectively.
We observe the same type of improvement as in the dense model: Lyap improves over Sparse more
significantly when T is small and n is large. But the largest improvement occurs when T = 75, not
the shortest training sequence length T = 25. A major difference lies in the impact of the Lyapunov
penalization on the spectrum of $\hat{A}$, as revealed in Figure 3(d). When $T$ is as small as 25, the sparse least-square method shrinks all the eigenvalues but still keeps most of them non-zero, while Lyap with a non-sequence sample of size 1600 over-estimates the first few largest eigenvalues in modulus but shrinks the rest to have very small modulus. In contrast, Lyap with the true $Q$ preserves the
spectrum much better. We may thus need an even better covariance estimate for the sparse model.
[Figure 4: Results on the pendulum video data. (a) The pendulum; (b) Normalized error for T = 6, 10, 20, 50; (c) Cosine score for T = 6, 10, 20, 50; each panel compares Ridge and Lyap.]
3.2 Video Data
We test our methods using a video sequence of a periodically swinging pendulum³, which consists of 500 frames of 75-by-80 grayscale images. One such frame is given in Figure 4(a). The period
is about 23 frames. To further reduce the dimension we take the second-level Gaussian pyramids,
resulting in images of size 9-by-11. We then treat each reduced image as a 99-dimensional vector,
and normalize each dimension to be zero-mean and standard deviation 1. We analyze this sequence
with a 99-dimensional first-order VAR model. To check whether a VAR model is a suitable choice,
we estimate a transition matrix from the first 400 frames by ridge regression while choosing the
penalization parameter on the next 50 frames, and predict on the last 50 frames. The best penalization parameter is 0.0156, and the testing normalized error and cosine score are 0.33 and 0.97,
respectively, suggesting that the dynamics of the video sequence is well-captured by a VAR model.
We compare the proposed method (6) with the ridge regression for several lengths of the training sequence, $T \in \{6, 10, 20, 50\}$, and treat the last 50 frames as the testing sequence. For both methods,
we split the training sequence into two halves and use the second half as a validation sequence. For
the proposed method, we simulate a non-sequence sample by randomly choosing 300 frames from
between the (T + 1)-st frame and the 450-th frame without replacement. We repeat this 10 times.
The testing normalized errors and cosine scores of both methods are given in Figures 4(b) and 4(c).
For the proposed method, we report the mean performance measures over the 10 simulated nonsequence samples with standard deviation. When T ? 20, which is close to the period, the proposed
method outperforms ridge regression very significantly except when T = 10 the cosine score of
Lyap is barely better than Ridge. However, when we increase T to 50, the difference between the
two methods vanishes, even though there is still much room for improvement as indicated by the
result of our model sanity check before. This may be due to our use of dependent data as the nonsequence sample, or simply insufficient non-sequence data. As for ?1 and ?2 , their values decrease
respectively from 512 and 2,048 to less than 32 as T increases, but since we fix the amount of nonsequence data, the interaction between their value changes is less clear than on the synthetic data.
4 Conclusion
We propose to improve penalized least-square estimation of VAR models by incorporating non-sequence data, which are assumed to be samples drawn from the stationary distribution of the
underlying VAR model. We construct a novel penalization term based on the discrete-time Lyapunov equation concerning the covariance (estimate) of the stationary distribution. Preliminary
experimental results demonstrate that our methods can improve significantly over standard penalized least-square methods when there are only few sequence data but abundant non-sequence data
and when the model assumption is valid. In the future, we would like to investigate the impact of $\hat{Q}$ on $\hat{A}$ in a precise manner. Also, we may consider noise processes $\epsilon^t$ with more general covariances,
and incorporate the noise covariance estimation into the proposed Lyapunov penalization scheme.
Finally and most importantly, we aim to apply the proposed methods to real scientific time series
data and provide a more effective tool for those modelling tasks.
³ A similar video sequence has been used in [16].
References
[1] P. Antsaklis and A. Michel. Linear systems. Birkhauser, 2005.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA 02178-9998, second edition, 1999.
[3] Y. Chen, A. Wiesel, Y. C. Eldar, and A. O. Hero. Shrinkage algorithms for MMSE covariance estimation. IEEE Transactions on Signal Processing, 58:5016-5029, 2010.
[4] Y. Chen, A. Wiesel, and A. O. Hero. Robust shrinkage estimation of high-dimensional covariance matrices. Technical report, arXiv:1009.5331v1 [stat.ME], September 2010.
[5] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1 ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 272-279, 2008.
[6] A. Gupta and Z. Bar-Joseph. Extracting dynamics from static cancer expression data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 5:172-182, 2008.
[7] J. Hamilton. Time series analysis. Princeton Univ Pr, 1994.
[8] T.-K. Huang and J. Schneider. Learning linear dynamical systems without sequence information. In Proceedings of the 26th International Conference on Machine Learning, pages 425-432, 2009.
[9] T.-K. Huang, L. Song, and J. Schneider. Learning nonlinear dynamic models from non-sequenced data. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010.
[10] O. Ledoit and M. Wolf. Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. Journal of Empirical Finance, 10:603-621, 2003.
[11] O. Ledoit and M. Wolf. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88:365-411, 2004.
[12] A. Lozano, N. Abe, Y. Liu, and S. Rosset. Grouped graphical granger modeling for gene expression regulatory networks discovery. Bioinformatics, 25(12):i110, 2009.
[13] T. C. Mills. The Econometric Modelling of Financial Time Series. Cambridge University Press, second edition, 1999.
[14] D. Noll, O. Prot, and A. Rondepierre. A proximity control algorithm to minimize nonsmooth and nonconvex functions. Pacific Journal of Optimization, 4(3):569-602, 2008.
[15] J. Schäfer and K. Strimmer. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology, 4, 2005.
[16] S. M. Siddiqi, B. Boots, and G. J. Gordon. Reduced-rank hidden Markov models. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010.
[17] C. Stein. Estimation of a covariance matrix. In Rietz Lecture, 39th Annual Meeting, Atlanta, GA, 1975.
[18] R. S. Tsay. Analysis of financial time series. Wiley-Interscience, 2005.
[19] B. P. Tu, A. Kudlicki, M. Rowicka, and S. L. McKnight. Logic of the yeast metabolic cycle: Temporal compartmentalization of cellular processes. Science, 310(5751):1152-1158, 2005.
[20] R. Yang and J. O. Berger. Estimation of a covariance matrix using the reference prior. Annals of Statistics, 22:1195-1211, 1994.
| 4482 |@word wiesel:2 norm:3 covariance:34 noll:1 initial:3 liu:1 series:19 score:10 denoting:1 longitudinal:1 mmse:1 outperforms:2 current:2 ka:6 yet:1 belmont:1 periodically:1 enables:1 plot:2 update:1 stationary:18 half:4 intelligence:2 short:1 regressive:5 along:3 sii:1 direct:1 become:2 qualitative:1 consists:1 combine:1 interscience:1 manner:1 expected:1 roughly:1 behavior:1 nor:1 resolve:1 kkxt:1 provided:1 underlying:2 unrelated:1 moreover:1 bounded:1 what:2 minimizes:1 finding:1 temporal:4 pseudo:1 every:1 firstorder:1 ti:1 finance:3 demonstrates:1 k2:1 prot:1 control:1 medical:1 compartmentalization:1 hamilton:1 bertsekas:1 before:2 positive:1 scientist:1 local:2 treat:4 limit:2 severely:1 ak:1 analyzing:2 subscript:1 path:2 black:1 initialization:2 studied:1 collect:3 range:1 unique:1 testing:11 x3:1 procedure:2 empirical:2 significantly:6 projection:6 word:1 cannot:1 close:3 onto:2 selection:1 ga:1 equivalent:1 economics:1 independently:3 convex:5 swinging:1 immediately:1 estimator:11 rule:3 importantly:1 his:1 financial:2 population:1 handle:1 sse:5 coordinate:2 annals:1 target:3 suppose:1 programming:1 us:1 satisfying:1 particularly:1 solved:1 rij:4 region:1 cycle:3 decrease:2 disease:2 mentioned:1 environment:1 vanishes:1 dynamic:16 solving:2 stock:1 train:1 univ:1 describe:2 effective:2 artificial:2 whitenoise:1 choosing:3 hyper:2 interest1:1 sanity:1 quite:4 whose:3 spend:1 larger:2 solve:3 rietz:1 otherwise:3 statistic:4 ledoit:2 itself:1 superscript:1 final:1 sequence:62 eigenvalue:12 kxt:3 propose:2 interaction:1 product:1 tu:1 relevant:1 riccati:1 kak1:4 achieve:2 frobenius:1 normalize:1 ky:4 a11:1 spent:1 illustrate:1 help:1 derive:1 stat:1 measured:1 ij:3 c:2 involves:1 implies:1 lyapunov:23 direction:2 fij:1 viewing:1 require:1 fix:2 preliminary:1 biological:1 underdetermined:1 proximity:1 considered:1 normal:2 predict:2 major:3 vary:2 consecutive:1 adopt:1 estimation:18 bridge:1 sensitive:1 largest:2 grouped:1 tool:3 minimization:1 rough:2 genomic:1 gaussian:3 always:1 aim:3 rather:1 pn:3 parkinson:2 shrinkage:5 varying:1 derived:1 refining:1 improvement:3 modelling:3 bernoulli:1 check:2 rank:1 contrast:1 baseline:2 dependent:1 inaccurate:1 her:1 relation:1 hidden:1 overall:2 among:1 arg:4 ill:1 issue:1 eldar:1 field:1 once:1 construct:1 biology:5 k2f:2 nearly:1 future:3 report:4 nonsmooth:1 gordon:1 few:6 modern:3 randomly:1 preserve:2 individual:1 replacement:1 atlanta:1 interest:4 investigate:2 semidefinite:1 strimmer:1 bundle:1 implication:1 bacteria:1 conduct:1 walk:2 initialized:1 abundant:1 instance:2 column:3 modeling:4 ar:1 deviation:2 entry:1 kq:1 learnt:1 synthetic:6 combined:1 rosset:1 st:1 fundamental:1 international:4 discipline:1 pool:1 squared:5 ambiguity:1 tzu:1 huang:3 slowly:1 choose:4 worse:1 return:1 michel:1 suggesting:1 coefficient:1 view:1 lot:1 analyze:1 pendulum:2 red:1 minimize:1 square:11 ni:3 variance:1 characteristic:1 qk:3 identify:1 marginally:1 worth:1 researcher:4 destructive:1 static:3 proved:1 improves:2 back:1 response:1 improved:1 entrywise:1 formulation:3 shrink:3 though:1 just:1 stage:1 xa:1 correlation:1 replacing:1 nonlinear:2 assessment:1 lack:1 bsp:1 reveal:1 indicated:2 scientific:3 yeast:4 modulus:11 normalized:8 true:18 remedy:1 unbiased:2 former:2 hence:1 regularization:3 lozano:1 symmetric:1 moore:1 white:2 during:1 uniquely:2 kak:2 cosine:10 criterion:1 ridge:30 demonstrate:3 performs:1 duchi:1 image:3 novel:2 recently:3 common:4 functional:1 mellon:2 measurement:4 refer:1 cambridge:1 grid:1 i6:4 similarly:1 
schneide:1 portfolio:1 stable:3 afer:1 multivariate:2 own:1 perspective:1 certain:3 nonconvex:1 success:1 meeting:1 devise:1 captured:1 minimum:6 impose:1 schneider:3 determine:2 shortest:1 period:2 signal:2 multiple:5 full:1 shalev:1 smooth:2 technical:1 cross:1 concerning:2 equally:1 molecular:1 impact:2 prediction:2 regression:13 patient:1 cmu:2 chandra:1 arxiv:1 repetitive:2 iteration:2 pyramid:1 robotics:1 cell:2 addition:1 sch:1 rest:1 undergo:1 contrary:2 incorporates:1 effectiveness:1 alzheimer:2 extracting:1 yang:1 intermediate:1 split:2 enough:2 revealed:1 fungi:1 affect:1 fit:1 zi:6 lasso:1 reduce:1 idea:4 whether:1 expression:10 tsay:1 forecasting:1 effort:2 penalty:22 song:1 suffer:1 algebraic:1 useful:4 clear:1 amount:6 stein:1 extensively:1 siddiqi:1 reduced:2 generate:2 nonsequenced:1 estimated:2 per:1 carnegie:2 discrete:3 achieving:1 drawn:5 neither:1 boxplots:2 v1:1 econometric:1 year:1 sum:2 run:2 inverse:1 nonsequence:5 draw:2 capturing:1 quadratic:1 identifiable:1 annual:1 constraint:4 precisely:2 x2:2 simulate:1 min:8 department:1 developing:1 according:2 pacific:1 combination:1 ball:1 mcknight:1 smaller:5 joseph:1 making:1 modification:1 outlier:1 invariant:1 sij:2 pr:1 equation:6 previously:2 turn:3 discus:2 abbreviated:2 granger:1 singer:1 hero:2 studying:2 available:2 apply:2 progression:3 observe:6 sjj:1 appearing:1 softthresholding:1 denotes:6 running:1 include:1 ensure:1 standardized:1 graphical:1 maintaining:1 objective:8 already:2 quantity:2 occurs:1 usual:3 diagonal:1 september:1 gradient:8 simulated:1 athena:1 kak2f:2 originate:1 whom:1 me:1 collected:1 considers:1 trivial:1 barely:1 cellular:1 length:6 index:2 berger:1 insufficient:2 mini:1 minimizing:1 difficult:2 statement:1 trace:2 motivates:1 proper:2 unknown:2 qk2f:1 perform:2 upper:1 boot:1 observation:3 snapshot:2 markov:1 arc:2 descent:6 situation:3 precise:2 frame:10 abe:1 complement:1 pair:1 akf:1 qa:6 able:1 bar:1 usually:4 below:1 dynamical:1 challenge:1 reliable:4 max:4 video:8 suitable:2 difficulty:1 rely:2 synchronize:1 regularized:1 scheme:4 improve:6 technology:1 auto:5 extract:1 genomics:1 prior:1 literature:1 discovery:1 kf:1 evolve:1 synchronization:2 lecture:1 kakf:1 interesting:1 limitation:1 proven:1 var:13 penalization:11 validation:5 sufficient:1 metabolic:3 viewpoint:1 cancer:1 genetics:1 penalized:4 repeat:1 last:2 bern:2 aij:1 understand:1 institute:1 sparse:21 curve:1 dimension:5 transition:7 valid:2 world:1 collection:3 projected:3 simplified:1 far:3 social:1 transaction:2 gene:8 keep:1 logic:1 global:2 conclude:1 assumed:1 xi:1 shwartz:1 spectrum:3 grayscale:1 continuous:1 search:3 regulatory:1 pretty:1 table:1 nature:1 zk:2 robust:5 obtaining:1 mse:1 investigated:2 domain:1 main:1 dense:6 noise:5 hyperparameters:2 profile:1 arise:1 edition:2 x1:1 referred:1 slow:1 wiley:1 shrinking:1 a21:1 candidate:1 lie:1 wkij:3 third:1 bij:3 rk:3 formula:1 xt:14 system2:1 gupta:1 bivariate:1 incorporating:3 exists:1 adding:2 effectively:1 conditioned:2 chen:2 easier:2 mill:1 simply:1 penrose:1 sectional:1 tracking:2 vulnerable:1 mij:2 wolf:2 satisfies:1 acm:1 ma:1 month:1 goal:1 sorted:1 towards:1 jeff:1 room:1 feasible:1 change:1 determined:4 typical:1 specifically:1 except:1 birkhauser:1 called:1 specie:1 kuo:1 experimental:3 latter:3 armijo:2 bioinformatics:2 incorporate:3 evaluate:1 princeton:1 |
3,847 | 4,483 | Multiple Instance Learning on Structured Data
1 Dan Zhang, 2 Yan Liu, 1 Luo Si, 3 Jian Zhang, 4 Richard D. Lawrence
1. Computer Science Department, Purdue University, West Lafayette, IN 47906
2. Computer Science Department, University of Southern California, Los Angeles, CA 90089
3. Statistics Department, Purdue University, West Lafayette, IN 47906
4. Machine Learning Group, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598
1
{zhang168, lsi}@cs.purdue.edu, 2 [email protected], 3 [email protected] , 4 [email protected]
Abstract
Most existing Multiple-Instance Learning (MIL) algorithms assume data instances
and/or data bags are independently and identically distributed. But there often
exists rich additional dependency/structure information between instances/bags
within many applications of MIL. Ignoring this structure information limits the
performance of existing MIL algorithms. This paper explores the research problem as multiple instance learning on structured data (MILSD) and formulates a
novel framework that considers additional structure information. In particular,
an effective and efficient optimization algorithm has been proposed to solve the
original non-convex optimization problem by using a combination of the Concave-Convex Constraint Programming (CCCP) method and an adapted Cutting Plane
method, which deals with two sets of constraints caused by learning on instances
within individual bags and learning on structured data. Our method has the nice
convergence property, with specified precision on each set of constraints. Experimental results on three different applications, i.e., webpage classification, market
targeting, and protein fold identification, clearly demonstrate the advantages of
the proposed method over state-of-the-art methods.
1 Introduction
Multiple Instance Learning (MIL) is a variation of the classical learning methods for problems with
incomplete knowledge on the instances (or examples) [4]. In a MIL problem, the labels are assigned
to bags, i.e., a set of instances, rather than individual instances [1, 4, 5, 13]. MIL has been widely
employed in areas such as text mining [1], drug design [4], and localized content based image
retrieval (LCBIR) [13].
One major assumption of most existing MIL methods is that instances (and bags) are independently
and identically distributed. But in many applications, the dependencies between instances/bags naturally exist and if incorporated in models, they can potentially improve the prediction performance
significantly. For example, in business analytics, big corporations often analyze the websites of different companies to look for potential partnerships. Since not all of the webpages in a website are
useful, we can treat the whole website of a specific company as a bag, and each webpage in this
website is considered as an instance. The hyperlinks between webpages provide important information on various relationships between these companies (e.g. supply-demand or joint-selling) and
more partner companies can be identified if we follow the hyperlinks of existing partners. Another
example is protein fold identification [15], whose goal is to predict protein fold with low conservation in primary sequence, e.g., Thioredoxin-fold (Trx-fold). MIL algorithms have been applied
to identify new Trx-fold proteins, where each protein sequence is considered as a bag, and some of
its subsequences are instances. The relational information between protein sequences, such as same
organism locations or similar species origins, can be used to help the prediction tasks.
Several recent methods have been proposed to model the dependencies between instances in each
bag [11, 12, 21]. However, none of them takes into consideration the relational structure between
bags or between instances across bags. Furthermore, most of existing MIL research only uses content similarity for modeling structures between instances in each bag, but does not consider other
types of relational structure information (e.g. hyperlink) among instances or bags. While much research work [16, 22] for traditional single instance learning has demonstrated that additional structure information (e.g., hyperlink) can be very useful, we believe this is similar for MIL.
Generally speaking, we summarize three scenarios of the structure information in MIL: (1) the
relational structures are on the instance level. For example, in the business partner example, the hyperlinks between different webpages can be considered as the relational structure between instances
(either in the same bag or across bags). (2) the structure information is available on the bag level.
For example, in the protein fold identification task, we can consider the phylogenetic tree to capture the evolutionary dependencies between protein sequences. (3) the structure information is available on
both instance level and bag level. We refer to these three scenarios of learning problems collectively
as multiple instance learning on structured data (MILSD).
In this paper, we propose a general framework that address all three structure learning scenarios
for MIL. The model consists of a regularization term that confines the capacity of the classifier,
a term that penalizes the difference between the predicted labels of the bags and their true labels,
and a graph regularization term based on the structure information. The corresponding optimization
problem is non-convex. But we show that it can be expressed as the difference between two convex
functions. Then, we employ an iterative method ? Constrained Concave-Convex Procedure (CCCP)
[14, 19] to solve this problem. To make the proposed method scalable to large datasets, the Cutting
Plane method [8] is adapted to solve the subproblems derived from each CCCP iteration. The novelty
of the proposed variant of Cutting Plane method lies in modeling dual sets of constraints, i.e., one
from modeling instances in individual bags, and the other from the structure information, and its
ability to control the precisions (i.e., $\epsilon_1$ and $\epsilon_2$ in Table 1) on different sets of constraints separately.
The reason why we need to control precisions separately is that since different sets of constraints
normally are derived from various sources and have different forms, their characteristics, as well as
the required optimization precisions, are very likely to be diverse. Furthermore, we prove an upper
bound of the convergence rate of the proposed optimization method, which is a significant result
given our optimization scheme for dual constraint sets can also be applied to many other learning
problems. Experiments on three applications demonstrate the advantages of the proposed research.
2 Methodology
2.1 Problem Statement and Notation
Suppose we are given a set of $n$ labeled bags $\{(B_i, Y_i), i = 1, 2, \cdots, n\}$, $u$ unlabeled bags $\{B_i, i = n+1, n+2, \cdots, n+u\}$, and a directed or undirected graph $G = (V, E)$ that depicts the structure between either bags or instances. Here, the instances in the bag $B_i$ are denoted as $\{B_{i1}, B_{i2}, \ldots, B_{in_i}\} \subset \mathcal{X}$, where $n_i$ is the total number of instances in this bag and $Y_i \in \{-1, 1\}$. Each node $v \in V$ corresponds to either a bag or an instance in either the labeled set or the unlabeled set, and the $j$-th edge $e_j = (p, q) \in E$ represents a link from node $p$ to node $q$. The task is to learn a classifier $w$¹ based on labeled, unlabeled bags, and the predefined structure graph so that the unlabeled bags can be correctly classified. The soft label for an instance $x$ can be estimated by $f(x) = w^\top x$. The soft label of the bag $B_i$ can be modeled as $f(B_i) = \max_{j \in B_i} w^\top B_{ij}$, and if $f(B_i) > 0$, this bag would be labeled as positive and otherwise negative.
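In code, the bag-level prediction rule is a one-liner; a minimal sketch (NumPy, names ours):

```python
import numpy as np

def bag_score(w, bag):
    """Soft label of a bag: the max instance score, f(B) = max_j w^T B_j.

    `bag` is an (n_i x d) array of instance feature vectors.
    """
    return np.max(bag @ w)

def predict_bag(w, bag):
    """A bag is positive iff its soft label exceeds zero."""
    return 1 if bag_score(w, bag) > 0 else -1
```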
2.2 Formulation
Our motivation is that labeled bags should be correctly classified and the soft labels of the bags or
instances defined on the graph G should be as smooth as possible. Specifically, a pair of nodes
linked by an edge tends to possess the same label and therefore the nodes lying on a densely linked
subgraph are likely to have the same labels [20]. The general formulation of MILSD is given as:
¹ Without loss of generality, in this paper, we only consider linear classifiers. Here, the bias of the classifier is absorbed by the feature vectors. The kernel version [3] of the proposed method can be easily derived.
$\min_w H_r(w) + H_d(w) + H_G(w)$, where $H_r(w)$ is a regularization term based on $w$ that depicts the capacity of this classifier. One of the possible options, which is also the one used in this paper, is $\|w\|^2$. $H_d(w)$ penalizes the difference between the estimated bag labels and the given labels. In this paper, without loss of generality, the hinge loss is used [3]. So, given a classifier $w$, $H_d(w)$ is calculated as $\frac{C}{n} \sum_{i=1}^n \max\{0, 1 - \max_{j \in B_i} Y_i w^\top B_{ij}\}$, where $C$ is the trade-off parameter. $H_G(w)$ is a graph regularization term based on the given graph $G$ that enforces smoothness on the soft labels of nodes in the given graph, which can be defined as $\frac{\beta}{|E|} \sum_{(p,q) \in E} w(p, q) \left| \frac{f(v_p)}{\sqrt{d(p)}} - \frac{f(v_q)}{\sqrt{d(q)}} \right|$, where $v_p$ and $v_q$ are two nodes in the graph, $w(p, q)$ is a weight function that measures the weight on the edge $(p, q)$, and $d(p)$ and $d(q)$ are the outgoing degrees of the nodes $v_p$ and $v_q$ respectively
[20]. $|E|$ is the number of edges in graph $G$. Depending on which of the three scenarios the graph is defined on, we name the formulation where the graph is defined on instances as I-MILSD, the
formulation where the graph is defined on bags as B-MILSD, and the formulation where the graph
is defined on both bags and instances as BI-MILSD. In particular,
1. For I-MILSD, $H_G(w)$ can be defined as $\frac{\beta}{|E|} \sum_{(p,q) \in E} w(p, q) \left| \frac{w^\top x_p}{\sqrt{d(p)}} - \frac{w^\top x_q}{\sqrt{d(q)}} \right|$, where $x_p$ and $x_q$ are two instances.

2. For B-MILSD, $H_G(w)$ is defined as $\frac{\beta}{|E|} \sum_{(p,q) \in E} w(p, q) \left| \frac{\max_{j \in B_p} w^\top B_{pj}}{\sqrt{d(p)}} - \frac{\max_{j \in B_q} w^\top B_{qj}}{\sqrt{d(q)}} \right|$. Then, the B-MILSD problem can be formulated as follows (a sketch of computing this graph term is given after this list):

$$\min_w \frac{1}{2}\|w\|^2 + \frac{C}{n}\sum_{i=1}^n \xi_i + \frac{\beta}{|E|} \sum_{(p,q) \in E} w(p, q) \left| \frac{\max_{j \in B_p} w^\top B_{pj}}{\sqrt{d(p)}} - \frac{\max_{j \in B_q} w^\top B_{qj}}{\sqrt{d(q)}} \right|$$
$$\text{s.t.} \quad \forall i \in \{1, 2, \ldots, n\}, \quad Y_i \max_{j \in B_i} w^\top B_{ij} \ge 1 - \xi_i, \quad (1)$$

where $\xi_i$ is the hinge loss and $\beta$ is the trade-off parameter for the graph term. The formulation proposed in [1] is a special case of the proposed formulation, with $\beta$ equal to zero.

3. The definition of $H_G(w)$ in BI-MILSD can be considered as a combination of the previous two formulations.
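As referenced in item 2 above, the following sketch computes the bag-level graph term $H_G(w)$ of B-MILSD for a given classifier. All names are hypothetical; out-degrees are counted from the edge list, and a unit degree is assumed for sink nodes without outgoing edges.

```python
import numpy as np

def graph_penalty(w, bags, edges, edge_weight, beta):
    """H_G(w) for B-MILSD: smoothness of bag soft labels over the graph.

    `bags` maps a node id to its (n_i x d) instance matrix; `edges` is a
    list of (p, q) pairs; `edge_weight(p, q)` returns w(p, q).
    """
    deg = {}
    for p, q in edges:
        deg[p] = deg.get(p, 0) + 1
    f = {v: np.max(bags[v] @ w) for v in bags}  # bag soft labels
    total = 0.0
    for p, q in edges:
        total += edge_weight(p, q) * abs(
            f[p] / np.sqrt(deg[p]) - f[q] / np.sqrt(deg.get(q, 1)))
    return beta * total / len(edges)
```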
In the following sections, our focus will be on the more challenging problem of B-MILSD, while I-MILSD and BI-MILSD can be solved in a similar way, since the $H_G(w)$ in I-MILSD is convex and
the formulation of BI-MILSD can be considered as a combination of the B-MILSD and I-MILSD.
2.3 Optimization Procedure with CCCP and Multi-Constraint Cutting Plane
The formulation in problem (1) combines both the goodness-of-fit for labeled bags and the structure
information embedded in the graph. However, since both $H_G(w)$ and the constraints in problem (1) are non-convex, the global optimal solution of this problem cannot be attained. To solve this problem, the constrained concave-convex procedure (CCCP) is used. It is an optimization method that deals with a concave-convex objective function with concave-convex constraints [14]. In this paper, without loss of generality, we only assume $w(p, q)$ to be a canonical weight function. To employ CCCP, first of all, for each edge $(p, q)$, a non-negative loss variable $\delta_{(p,q)}$ is introduced. Then, problem (1) can be solved iteratively. In particular, given an initial point $w^{(0)}$, CCCP iteratively computes $w^{(t+1)}$ from $w^{(t)}$² by replacing $\max_{j \in B_i} w^\top B_{ij}$ with its first-order Taylor expansion at $w^{(t)}$, and solving the resulting quadratic programming problem as follows, until convergence ($u_i^{(t)} = \arg\max_{j \in \{1, \ldots, n_i\}} (w^{(t)})^\top B_{ij}$).

² The superscript $t$ is used to denote that the result is obtained from the $t$-th CCCP iteration. For example, $w^{(t)}$ is the optimized classifier from the $t$-th CCCP iteration step.
$$\min_{w, \xi_i \ge 0, \delta_{(p,q)} \ge 0} \quad \frac{1}{2}\|w\|^2 + \frac{C}{n}\sum_{i=1}^n \xi_i + \frac{\beta}{|E|} \sum_{(p,q) \in E} \delta_{(p,q)} \quad (2)$$
$$\text{s.t.} \quad Y_i w^\top B_{iu_i^{(t)}} \ge 1 - \xi_i \quad \forall i \in \{1, 2, \ldots, n\},$$
$$\frac{w^\top B_{pk}}{\sqrt{d(p)}} - \frac{w^\top B_{qu_q^{(t)}}}{\sqrt{d(q)}} \le \delta_{(p,q)} \quad \forall k \in \{1, 2, \ldots, n_p\}, \; \forall (p, q) \in E,$$
$$\frac{w^\top B_{q(k-n_p)}}{\sqrt{d(q)}} - \frac{w^\top B_{pu_p^{(t)}}}{\sqrt{d(p)}} \le \delta_{(p,q)} \quad \forall k \in \{n_p + 1, \ldots, n_p + n_q\}.$$
The problem (2) can be directly solved as a standard quadratic programming problem [2]. However,
in many real world applications, the number of the labeled bags as well as the number of links
between bags are huge. In this case, we would need to find a way that can solve this problem
efficiently. Instead of directly solving this optimization problem, we employ the Cutting Plane
method [8], which has shown its effectiveness and efficiency in solving similar tasks recently [6].
But different from the method employed in [6], in this paper, we need to deal with two sets of
constraints, rather than just one constraint set, with specified precisions separately. A new way to
adapt the Cutting Plane method is devised here. Problem (2) is equivalently transformed to the
following form:
$$\min_{w, \xi \ge 0, \delta \ge 0} \quad \frac{1}{2}\|w\|^2 + C\xi + \beta\delta \quad (3)$$
$$\text{s.t.} \quad \forall c \in \{0, 1\}^n, \quad \frac{1}{n} w^\top \sum_{i=1}^n c_i Y_i B_{iu_i^{(t)}} \ge \frac{1}{n}\sum_{i=1}^n c_i - \xi,$$
$$\forall \tau \in \{0, 1\}^{|E| \times (n_p + n_q)} \;\text{with}\; \Big(\forall e_j \in E, \; \sum_{k=1}^{n_p + n_q} \tau_{jk} \le 1\Big):$$
$$\frac{w^\top}{|E|} \sum_{j=1}^{|E|} \left( \sum_{k=1}^{n_p} \tau_{jk}\Big(\frac{B_{pk}}{\sqrt{d(p)}} - \frac{B_{qu_q^{(t)}}}{\sqrt{d(q)}}\Big) + \sum_{k=1}^{n_q} \tau_{j(k+n_p)}\Big(\frac{B_{qk}}{\sqrt{d(q)}} - \frac{B_{pu_p^{(t)}}}{\sqrt{d(p)}}\Big) \right) \le \delta,$$

where $e_j = (p, q)$, and $\tau$ is a matrix with $|E|$ rows and a varying number of columns: for the $j$-th row of $\tau$, it has $n_p + n_q$ columns (possible constraints). For each edge, at most one constraint could be activated for each feasible $\tau$.
Theorem 1: Any solution $w^*$ of problem (2) is also a solution to problem (3) (and vice versa) with $\xi^* = \frac{1}{n}\sum_{i=1}^n \xi_i^*$ and $\delta^* = \frac{1}{|E|}\sum_{(p,q) \in E} \delta_{(p,q)}^*$.³

Proof: Please refer to the supplemental materials in the author's homepage.
The benefit of making this transformation is that, as we shall see later, during each Cutting Plane iteration at most two constraints will be added and therefore the final solution would be extremely
sparse, with the number of non-zero dual variables independent of the number of training examples. Now the problem turns to how to solve the problem (3) efficiently, which is convex,
but contains two sets of exponentially many constraints due to the large number of feasible $c$ and $\tau$. We present a novel adaption of the Cutting Plane method that can handle the two sets of constraints simultaneously. More specifically, the main motivation of the method proposed here is to find two small subsets of constraints, i.e., $\Omega_1$ and $\Omega_2$, from the constraint sets in Eq.(3). With these two sets of selected constraints, the solution of the corresponding relaxed problem satisfies all the constraints from problem (3) up to two precisions $\epsilon_1$ and $\epsilon_2$, i.e., $\forall c \in \{0, 1\}^n$:

$$\frac{1}{n} w^\top \sum_{i=1}^n c_i Y_i B_{iu_i^{(t)}} \ge \frac{1}{n}\sum_{i=1}^n c_i - (\xi + \epsilon_1),$$

and $\forall (\tau \in \{0, 1\}^{|E| \times (n_p + n_q)})$ with $(\forall e_j \in E, \sum_{k=1}^{n_p + n_q} \tau_{jk} \le 1)$:

$$\frac{w^\top}{|E|} \sum_{j=1}^{|E|} \left( \sum_{k=1}^{n_p} \tau_{jk}\Big(\frac{B_{pk}}{\sqrt{d(p)}} - \frac{B_{qu_q^{(t)}}}{\sqrt{d(q)}}\Big) + \sum_{k=1}^{n_q} \tau_{j(k+n_p)}\Big(\frac{B_{qk}}{\sqrt{d(q)}} - \frac{B_{pu_p^{(t)}}}{\sqrt{d(p)}}\Big) \right) \le (\delta + \epsilon_2).$$

It indicates that the two remaining sets of constraints (that are not added to $\Omega_1$ and $\Omega_2$) will not be violated up to the two precisions $\epsilon_1$ and $\epsilon_2$ respectively, and therefore they do not need to be added to $\Omega_1$ and $\Omega_2$ explicitly.

³ The subscript $*$ denotes the optimal value of the corresponding variable.
The proposed method constructs $\Omega_1^t$ and $\Omega_2^t$ iteratively, starting from two empty sets $\Omega_1^{t_0}$⁴ and $\Omega_2^{t_0}$ respectively. During the $s$-th Cutting Plane iteration, based on $w^{t_s}$, the most violated constraint for $\Omega_1^{t_s}$ can be computed as:

$$c_i^{t_s} = \begin{cases} 1, & \text{if } Y_i (w^{t_s})^\top B_{iu_i^{(t)}} < 1 \\ 0, & \text{otherwise} \end{cases}, \quad (4)$$

and the most violated constraint for $\Omega_2^{t_s}$ can be computed as:

$$\tau_{jk}^{t_s} = \begin{cases} 1, & \text{if } (k = k^*) \text{ and } \max\Big\{ \max_{k \in \{1, \ldots, n_p\}} (w^{t_s})^\top \Big(\frac{B_{pk}}{\sqrt{d(p)}} - \frac{B_{qu_q^{(t)}}}{\sqrt{d(q)}}\Big), \; \max_{k \in \{n_p+1, \ldots, n_p+n_q\}} (w^{t_s})^\top \Big(\frac{B_{q(k-n_p)}}{\sqrt{d(q)}} - \frac{B_{pu_p^{(t)}}}{\sqrt{d(p)}}\Big) \Big\} > 0 \\ 0, & \text{otherwise}, \end{cases} \quad (5)$$

where $k^* = \arg\max_k \max\Big\{ \max_{k \in \{1, \ldots, n_p\}} (w^{t_s})^\top \Big(\frac{B_{pk}}{\sqrt{d(p)}} - \frac{B_{qu_q^{(t)}}}{\sqrt{d(q)}}\Big), \; \max_{k \in \{n_p+1, \ldots, n_p+n_q\}} (w^{t_s})^\top \Big(\frac{B_{q(k-n_p)}}{\sqrt{d(q)}} - \frac{B_{pu_p^{(t)}}}{\sqrt{d(p)}}\Big) \Big\}$.
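The most violated constraint (4) is cheap to compute: it simply flags every labeled bag whose representative instance is margin-violating. A minimal sketch is given below (the most violated $\tau$ of Eq. (5) is computed analogously, edge by edge); names are ours.

```python
import numpy as np

def most_violated_c(w, bag_reps, Y):
    """Most violated constraint c for Omega_1, Eq. (4).

    `bag_reps[i]` is the representative instance B_{i u_i^{(t)}} chosen
    by the current CCCP iteration; Y[i] is the bag label in {-1, +1}.
    """
    scores = np.array([Y[i] * (w @ bag_reps[i]) for i in range(len(Y))])
    return (scores < 1.0).astype(int)
```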
After calculating these two sets of most violated constraints, the two stopping conditions can be
computed:
$$H_1^{t_s} = \left( \frac{(w^{t_s})^\top}{n} \sum_{i=1}^n c_i^{t_s} Y_i B_{iu_i^{(t)}} \ge \frac{1}{n}\sum_{i=1}^n c_i^{t_s} - (\xi^{t_s} + \epsilon_1) \right), \quad (6)$$

$$H_2^{t_s} = \left( \frac{(w^{t_s})^\top}{|E|} \sum_{j=1}^{|E|} \left( \sum_{k=1}^{n_p} \tau_{jk}^{t_s}\Big(\frac{B_{pk}}{\sqrt{d(p)}} - \frac{B_{qu_q^{(t)}}}{\sqrt{d(q)}}\Big) + \sum_{k=n_p+1}^{n_p+n_q} \tau_{jk}^{t_s}\Big(\frac{B_{q(k-n_p)}}{\sqrt{d(q)}} - \frac{B_{pu_p^{(t)}}}{\sqrt{d(p)}}\Big) \right) \le (\delta^{t_s} + \epsilon_2) \right). \quad (7)$$
The Cutting Plane iteration will terminate if both conditions $H_1^{t_s}$ and $H_2^{t_s}$ are true. Otherwise, $c^{t_s}$ will be added to $\Omega_1^{t_s}$ if $H_1^{t_s}$ is false, and $\tau^{t_s}$ will be added to $\Omega_2^{t_s}$ if $H_2^{t_s}$ is false. Then, the new optimization problem turns to:

$$\min_{w, \xi \ge 0, \delta \ge 0} \quad \frac{1}{2}\|w\|^2 + C\xi + \beta\delta \quad (8)$$
$$\text{s.t.} \quad \forall c \in \Omega_1^{t_s}, \quad \frac{1}{n} w^\top \sum_{i=1}^n c_i Y_i B_{iu_i^{(t)}} \ge \frac{1}{n}\sum_{i=1}^n c_i - \xi,$$
$$\forall \tau \in \Omega_2^{t_s}, \quad \frac{w^\top}{|E|} \sum_{j=1}^{|E|} \left( \sum_{k=1}^{n_p} \tau_{jk}\Big(\frac{B_{pk}}{\sqrt{d(p)}} - \frac{B_{qu_q^{(t)}}}{\sqrt{d(q)}}\Big) + \sum_{k=1}^{n_q} \tau_{j(k+n_p)}\Big(\frac{B_{qk}}{\sqrt{d(q)}} - \frac{B_{pu_p^{(t)}}}{\sqrt{d(p)}}\Big) \right) \le \delta.$$

This optimization problem can be solved efficiently through the dual form [2].
2.4 Analysis and Discussions
The whole algorithm of B-MILSD is described in Table 1. Here, $J^t = \frac{1}{2}\|w^{(t)}\|^2 + C\xi^{(t)} + \beta\delta^{(t)}$. The
convergence of the proposed method is guaranteed. Given an initial w, the outer CCCP iteration has
already been proved to converge to a local optimal solution [14]. The final solution can be improved
by running this algorithm several times and picking the solution with the smallest J (t) value. We
will show that the Cutting Plane iterations with two different sets of constraints converge in a fixed
number of steps through the following two theorems.
Theorem 2: For each Cutting Plane iteration described in Table 1, the objective function of (8) will be increased by at least

$$\varpi = \min\left\{ \frac{C\epsilon_1}{2}, \; \frac{\epsilon_1^2}{8R^2}, \; \frac{\beta\epsilon_2}{2}, \; \frac{\epsilon_2^2}{16R^2}, \; \frac{(\epsilon_1+\epsilon_2)^2}{(24+16\sqrt{2})R^2} \right\}, \quad \text{where } R^2 = \max_{i,j} \|B_{ij}\|^2.$$
Sketch of Proof: The detailed proof of this theorem can be found in the supplemental materials.
Here, we only briefly outline how we proved it. In each Cutting Plane iteration described
in Table 1, there are three possibilities for updating the constraints. In each case, we will find a
feasible direction for increasing the objective function. A line search method will then be used to
⁴ Here, $t_s$ denotes the $s$-th Cutting Plane iteration for solving the problem from the $t$-th CCCP iteration.
Table 1: The description of B-MILSD

Input: 1. Labeled bags {(B_i, Y_i), i = 1, 2, ..., n}; 2. Unlabeled bags {B_i, i = n+1, ..., n+u}; 3. A graph G which represents the relationship between these bags (the graph can be built solely on the labeled bags, or on a union of both the labeled bags and the unlabeled bags); 4. Parameters: loss weights C and λ, CCCP solution precision ε, Cutting Plane solution precision for constraint 1: ε₁, Cutting Plane solution precision for constraint 2: ε₂.
Output: The classifier w.

CCCP Iterations:
1.  Initialize w⁰, t = 0, ΔJ = 10³, J⁻¹ = 10³.
2.  while ΔJ / J^{t−1} > ε do
3.    Derive problem (2). Set the constraint sets Ω_1^{t_0} = ∅, Ω_2^{t_0} = ∅ and s = −1.
      Cutting Plane Iterations:
4.    repeat
5.      s = s + 1.
6.      Get (w^{(t_s)}, ξ^{(t_s)}, ζ^{(t_s)}) by solving (8).
7.      Compute the most violated constraints, i.e., c^{t_s} and μ^{t_s}, by Eq. (4) and Eq. (5).
8.      Compute the stopping criteria, i.e., H_1^{t_s} and H_2^{t_s}, by Eq. (6) and Eq. (7).
9.      Update Ω_1^{t_s} by Ω_1^{t_{s+1}} = Ω_1^{t_s} ∪ c^{t_s} if H_1^{t_s} is false; otherwise, Ω_1^{t_{s+1}} = Ω_1^{t_s}. Update Ω_2^{t_s} by Ω_2^{t_{s+1}} = Ω_2^{t_s} ∪ μ^{t_s} if H_2^{t_s} is false; otherwise, Ω_2^{t_{s+1}} = Ω_2^{t_s}.
10.   while H_1^{t_s} is false ∨ H_2^{t_s} is false
11.   t = t + 1.
12.   w^{(t)} = w^{(t−1)_s}, ξ^{(t)} = ξ^{(t−1)_s}, ζ^{(t)} = ζ^{(t−1)_s}.
13.   ΔJ = J^{t−1} − J^t.
14. end while
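To make the control flow of Table 1 concrete, here is a minimal Python sketch of the nested CCCP/Cutting Plane loops. The helpers `solve_qp`, `most_violated_mu`, `h2_holds` and `init_w` are hypothetical names standing in for Eq. (8), Eq. (5), Eq. (7) and the initialization; `most_violated_c` and `h1_holds` are as sketched above, and `data` is an assumed container with attributes `B_sel` and `Y`.

```python
def b_milsd(data, C, lam, eps, eps1, eps2):
    w = init_w(data)                       # hypothetical initializer for w^0
    J_prev = delta_J = 1e3
    while delta_J / J_prev > eps:          # outer CCCP iterations
        omega1, omega2 = [], []            # empty working constraint sets
        while True:                        # inner Cutting Plane iterations
            w, xi, zeta, J = solve_qp(data, omega1, omega2, C, lam)  # Eq. (8)
            c = most_violated_c(w, data.B_sel, data.Y)               # Eq. (4)
            mu = most_violated_mu(w, data)                           # Eq. (5)
            h1 = h1_holds(w, data.B_sel, data.Y, c, xi, eps1)        # Eq. (6)
            h2 = h2_holds(w, zeta, mu, data, eps2)                   # Eq. (7)
            if not h1:
                omega1.append(c)           # append most violated c^{t_s}
            if not h2:
                omega2.append(mu)          # append most violated mu^{t_s}
            if h1 and h2:
                break                      # both stopping conditions hold
        delta_J, J_prev = J_prev - J, J
    return w
```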
(1) $H_1^{t_s}$ is false and $H_2^{t_s}$ is true: $c^{t_s}$ is added to $\Omega_1^{t_s}$. The minimal improvement of the objective function for problem (8) after this constraint is added would be $\min\{C\epsilon_1/2,\ \epsilon_1^2/(8R^2)\}$. (2) $H_1^{t_s}$ is true and $H_2^{t_s}$ is false: $\Omega_2^{t_s}$ is updated by appending $\mu^{t_s}$. In this case, the minimal increment will be $\min\{\lambda\epsilon_2/2,\ \epsilon_2^2/(16R^2)\}$. (3) Both $H_1^{t_s}$ and $H_2^{t_s}$ are false: the most violated constraints are added to both $\Omega_1^{t_s}$ and $\Omega_2^{t_s}$. We proved that the minimal increment is $\min\{\min\{C,\lambda\}(\epsilon_1+\epsilon_2)/2,\ (\epsilon_1+\epsilon_2)^2/((24+16\sqrt{2})R^2)\}$. Integrating all of these three cases, it is clear that for each Cutting Plane iteration the minimal increment is $\gamma = \min\{C\epsilon_1/2,\ \epsilon_1^2/(8R^2),\ \lambda\epsilon_2/2,\ \epsilon_2^2/(16R^2),\ (\epsilon_1+\epsilon_2)^2/((24+16\sqrt{2})R^2)\}$, since $\min\{C\epsilon_1/2,\ \lambda\epsilon_2/2\} \leq \min\{C,\lambda\}(\epsilon_1+\epsilon_2)/2$.
Theorem 3: The proposed Cutting Plane iteration terminates after at most $C/\gamma$ steps, where

$$\gamma = \min\left\{ \frac{C\epsilon_1}{2},\ \frac{\epsilon_1^2}{8R^2},\ \frac{\lambda\epsilon_2}{2},\ \frac{\epsilon_2^2}{16R^2},\ \frac{(\epsilon_1+\epsilon_2)^2}{(24+16\sqrt{2})R^2} \right\} \quad \text{and} \quad R^2 = \max_{i,j} B_{ij}^2.$$

Proof: w = 0, ξ = 1, ζ = 0 is a feasible solution for problem (3). Therefore, the objective function of (3) is upper bounded by C, and should be lower bounded by 0. Given the conclusion from Theorem 2, it is clear that the Cutting Plane iteration will terminate within $C/\gamma$ steps.
The Cutting Plane method has already been employed in several previous works. In [6, 7, 17], the authors adapted the Cutting Plane method to accelerate structural SVM related algorithms. However, these works do not explicitly consider the case when several different sets of constraints with specified precisions are involved. The novelty of the proposed method lies in its ability to control these optimization precisions separately, while still enjoying the sparseness of the final solution with respect to the number of dual variables, which is brought by the slack variable transformation. In [18], the authors solved the problem of structural SVM with latent variables by employing CCCP and the bundle method [9]. The MIL problem itself can be considered as a special case of the latent variable problem. But the major limitation of [18] is that it cannot incorporate the relational information into the formulation, and therefore cannot be used here. Furthermore, [18] does not consider dual sets of constraints in optimization, which is less appropriate than the proposed optimization method.
3 Experiments

3.1 Webpage Classification
In webpage classification, each webpage can be considered as a bag, and its corresponding passages represent its instances [1]. The hyperlinks between different webpages are treated as the additional relational structure/links between different bags. The WebKB⁵ dataset is used in the experiments.

⁵ http://www.cs.cmu.edu/~webkb/
Figure 1: Classification and CPU Time Comparisons. [Panels (a)–(c): accuracy vs. training ratio on Course, Faculty and Student; panel (d): AUC vs. training ratio on ASCOT; panels (e)–(h): CPU time (in seconds) vs. training ratio on the same four datasets. Each panel compares B-MILSD, LC-MF, I-miSVM and B-miSVM.]
There are in total 8280 webpages in this dataset. The webpages without any incoming and outgoing links are deleted, and 6883 webpages are left. The three most frequently appearing categories, i.e., student, course, and faculty, are used for classification, where each sub-dataset contains all of the webpages/bags from one of the three categories, and the same number of negative bags randomly sampled from the remaining six categories in WebKB. The hyperlinks between these webpages are used as the structure/link information. The tf-idf (normalized term frequency and log inverse document frequency) [10] features are extracted for each passage, and the stop words are removed. We use Porter as the stemmer.
In the proposed method, C and λ are set by 5-fold cross validation through the grids 2^{[−5:5]} and [0, 0.01, 0.1, 1] respectively on the training set. To show the effects of the structure information on the performance of MIL methods, we compare the proposed method with the instance-based multiple instance support vector machine (I-miSVM) as well as the bag-based multiple instance support vector machine (B-miSVM) [1]. The formulation of these two methods can be considered as a special case of the proposed method with λ equal to zero, and they are two different heuristic iterative ways of implementing the same formulation. Their parameters are also set by 5-fold cross validation. Link-Content Matrix Factorization (LC-MF) is a non-MIL matrix factorization method [22], which has been shown to outperform several alternatives, including SVM. We conduct experiments with LC-MF based on the single instances that we extract for the same set of examples, and the corresponding links. Similar to [22], the number of latent factors is set to be 50. After computing the latent factors, a linear SVM is trained on the training set with the hinge loss parameter C being determined by using 5-fold cross validation. For each experiment, a fixed ratio of bags is chosen as the training set, while the remaining examples are treated as the testing set. The average results over 20 independent runs are reported on the training ratios [0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5].
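A rough sketch of this preprocessing and tuning protocol in scikit-learn is shown below. This is our reconstruction, not the authors' code: `passages` and `labels` stand for the extracted passage texts and their labels, Porter stemming (e.g. via NLTK) is assumed to have been applied to the passages beforehand, and the λ grid [0, 0.01, 0.1, 1] would be searched analogously for B-MILSD itself.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

vectorizer = TfidfVectorizer(stop_words="english")    # tf-idf with stop words removed
X = vectorizer.fit_transform(passages)                # `passages`: assumed given
param_grid = {"C": [2.0 ** k for k in range(-5, 6)]}  # the 2^[-5:5] grid from the text
clf = GridSearchCV(LinearSVC(), param_grid, cv=5).fit(X, labels)
```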
The classification results are reported in Fig. 1(a)(b)(c) and the CPU time comparison results are shown in Fig. 1(e)(f)(g). In Table 2, we further report the performances when the training ratio equals 0.2. From these experimental results, it is clear that the performance of the proposed method is better than the other comparison methods in accuracy, and its CPU time is comparatively low.
3.2 Market Targeting
Market targeting is a popular topic for big corporations. Its basic objective is to automatically identify potential partners. One feasible market targeting strategy is to analyze the websites of the potential partners. But usually not all of the webpages are useful for partner identification. So, it is better to formulate it as a MIL problem, in which each website is considered as a bag, and its associated webpages are considered as instances. Two related companies may be connected through hyperlinks in some of their webpages.
We obtained a dataset (ASCOT) from a big international corporation. In ASCOT, the webpages of around 225 companies are crawled. 25 of the companies/bags are labeled as positive, since they are partners of this corporation, while the remaining 200 companies/bags are treated as negative ones. For each company, the webpages with less than 100 unique words are removed and at most 50 webpages with the largest number of unique words⁶ are selected as instances. The hyperlinks between webpages of different companies are treated as the structure information. For each experiment, we fix the training ratio of positive and negative bags, while the remaining bags are considered as the testing set. The averaged results over 20 independent runs are reported on the training ratios [0.1, 0.2, 0.3, 0.4, 0.5]. The parameters for different methods are tuned in the same way as on WebKB, but for the ratios 0.1 and 0.2 we use 3-fold cross validation due to the lack of positive bags. For LC-MF, experiments are conducted on the instances which are the averages of the instances in each bag. Because of the extremely imbalanced nature of this dataset, the Area Under Curve (AUC) is used as the evaluation measure.

The corresponding results are reported in Fig. 1(d) and Fig. 1(h). In Table 2, we report the performances when the training ratio equals 0.2. On this dataset, B-MILSD performs much better than the comparison methods, especially when the ratio of training examples is low. This is because the hyperlink information helps a lot when the content information is rare in MIL, and the MIL setting is useful to eliminate the useless instances especially when the supervised information is scarce.
3.3 Protein Fold Identification
In protein fold identification [15], the low conservation of primary sequence in protein superfamilies such as Thioredoxin-fold (Trx-fold) makes conventional modeling methods, such as Hidden Markov Models, difficult to use. MIL can be used to identify new Trx-fold proteins naturally, in which each protein sequence is considered as a bag, and some of its subsequences are considered as instances. Here, we use a benchmark protein dataset⁷. In each protein's primary sequence, first of all, the primary sequence motif (typically CxxC) is found. Then, a window of size 214 around it is extracted and aligned. These windows are then mapped to an 8-dimensional feature space. The similarities between different proteins are estimated by using clustalw⁸. If the score between a pair of proteins exceeds 25, then we consider that a link exists between them.

Following the experiment setting in [15], we conduct 5-fold cross validation to test the performances. The averaged classification accuracies and CPU running times are reported in Table 2. From the comparisons, we can see that on this dataset the proposed method is both efficient and effective. Its CPU running time is almost 10–100 times faster than the comparison methods.
Table 2: Performance Comparisons

          Measure          B-MILSD  LC-MF  I-miSVM  B-miSVM
Course    Accuracy (%)     97.2     95.9   94.3     94.5
          Time (seconds)   49.1     648.5  23.9     95.6
Faculty   Accuracy (%)     95.2     95.3   93.3     93.4
          Time (seconds)   73.9     360.6  29.7     591.6
Student   Accuracy (%)     92.7     91.7   89.5     89.1
          Time (seconds)   245.7    526.3  41.2     540.4

          Measure          B-MILSD  LC-MF  I-miSVM  B-miSVM
ASCOT     AUC              0.350    0.248  0.264    0.230
          Time (seconds)   76.0     56.4   20.9     20.7
Protein   Accuracy (%)     96.2     95.2   92.2     82.7
          Time (seconds)   1.7      16.9   160.3    73.8

⁶ Still, we use Porter as the stemmer and have removed the stop words.
⁷ http://cse.unl.edu/~qtao/datasets/mil dataset Trx protein.html
⁸ http://www.ebi.ac.uk/Tools/msa/clustalw2/
4 Conclusions
This paper presents a novel machine learning problem, multiple instance learning on structured data (MILSD), for incorporating additional structure information into multiple instance learning. In particular, a general framework of MILSD is proposed for dealing with the additional structure information in different scenarios. An effective and efficient optimization method is proposed for MILSD by combining the CCCP method and a new multi-constraint Cutting Plane method. Some theoretical results are proved to justify the methodology that we employed to handle multi-sets of
constraints with the Cutting Plane method. The experimental results on three different applications clearly demonstrate the advantages of the proposed method. For future work, we plan to adapt the current framework to solve multi-view multiple instance learning on structured data.

Acknowledgement: The work of Dan Zhang and Luo Si was partially supported by NSF research grants IIS-0746830, CNS-1012208 and IIS-1017837, and the Center for Science of Information (CSoI), an NSF Science and Technology Center, under grant agreement CCF-0939370. The work of Yan Liu was partially sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA) under the Anomaly Detection at Multiple Scales (ADAMS) program, Agreement Number W911NF-11-C-0200. The authors would also like to express their sincere thanks to Prof. S.V.N. Vishwanathan and the anonymous reviewers for their constructive suggestions.
References

[1] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning. In NIPS, 2003.
[2] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[4] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1–2):31–71, 1997.
[5] T. Gärtner, P. A. Flach, A. Kowalczyk, and A. J. Smola. Multi-instance kernels. In ICML, 2002.
[6] T. Joachims. Training linear SVMs in linear time. In KDD, 2006.
[7] T. Joachims, T. Finley, and C.-N. J. Yu. Cutting-plane training of structural SVMs. Machine Learning, 2009.
[8] J. E. Kelley, Jr. The cutting-plane method for solving convex programs. Journal of the SIAM, 1960.
[9] K. C. Kiwiel. Proximity control in bundle methods for convex nondifferentiable minimization. Mathematical Programming, 46:105–122, 1990.
[10] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008.
[11] A. McGovern and D. Jensen. Identifying predictive structures in relational data using multiple instance learning. In ICML, 2003.
[12] G.-J. Qi, X.-S. Hua, Y. Rui, T. Mei, J. Tang, and H.-J. Zhang. Concurrent multiple instance learning for image categorization. In CVPR, 2007.
[13] R. Rahmani and S. A. Goldman. MISSL: Multiple-instance semi-supervised learning. In ICML, 2006.
[14] A. J. Smola, S. V. N. Vishwanathan, and T. Hofmann. Kernel methods for missing variables. In AISTATS, 2005.
[15] Q. Tao, S. D. Scott, N. V. Vinodchandran, and T. T. Osugi. SVM-based generalized multiple-instance learning via approximate box counting. In ICML, 2004.
[16] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[17] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 2005.
[18] C.-N. J. Yu and T. Joachims. Learning structural SVMs with latent variables. In ICML, 2009.
[19] A. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 2003.
[20] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In ICML, 2005.
[21] Z.-H. Zhou, Y.-Y. Sun, and Y.-F. Li. Multi-instance learning by treating instances as non-i.i.d. samples. In ICML, 2009.
[22] S.-H. Zhu, K. Yu, Y. Chi, and Y.-H. Gong. Combining content and link classification using matrix factorization. In SIGIR, 2007.
3,848 | 4,484 | Structured Learning for Cell Tracking
Xinghua Lou, Fred A. Hamprecht
Heidelberg Collaboratory for Image Processing (HCI)
Interdisciplinary Center for Scientific Computing (IWR)
University of Heidelberg, Heidelberg 69115, Germany
{xinghua.lou,fred.hamprecht}@iwr.uni-heidelberg.de
Abstract
We study the problem of learning to track a large quantity of homogeneous objects
such as cell tracking in cell culture study and developmental biology. Reliable
cell tracking in time-lapse microscopic image sequences is important for modern
biomedical research. Existing cell tracking methods are usually kept simple and
use only a small number of features to allow for manual parameter tweaking or
grid search. We propose a structured learning approach that allows optimum parameters to be learned automatically from a training set. This allows for the use of a richer set of features, which in turn affords improved tracking compared to recently
reported methods on two public benchmark sequences.
1 Introduction
One distinguishing property of life is its temporal dynamics, and it is hence only natural that time
lapse experiments play a crucial role in current research on signaling pathways, drug discovery and
developmental biology [17]. Such experiments yield a very large number of images, and reliable
automated cell tracking emerges naturally as a prerequisite for further quantitative analysis.
Even today, cell tracking remains a challenging problem in dense populations, in the presence of
complex behavior or when image quality is poor. Existing cell tracking methods can broadly be
categorized as deformable models, stochastic filtering and object association. Deformable models
combine detection, segmentation and tracking by initializing a set of models (e.g. active contours) in
the first frame and updating them in subsequent frames (e.g. [17, 8]). Large displacements are difficult to capture with this class of techniques and are better handled by state space models, e.g. in the
guise of stochastic filtering. The latter also allows for sophisticated observation models (e.g. [20]).
Stochastic filtering builds on a solid statistical foundation, but is often limited in practice due to its
high computational demands. Object association methods approximate and simplify the problem by
separating the detection and association steps: once object candidates have been detected and characterized, a second step suggests associations between object candidates at different frames. This
class of methods scales well [21, 16, 13] and allows the tracking of thousands of cells in 3D [19].
All of the above approaches contain energy terms whose parameters may be tedious or difficult
to adjust. Recently, great efforts have been made to produce better energy terms with the help of
machine learning techniques. This was first accomplished by casting tracking as a local affinity
prediction problem such as binary classification with either offline [1] or online learning [11, 5, 15],
weakly supervised learning with imperfect oracles [27], manifold appearance model learning [25],
or ranking [10, 18]. However, these local methods fail to capture the very important dependency
among associations, hence the resulting local affinities do not necessarily guarantee a better global
association [26]. To address this limitation, [26] extended the RankBoost method from [18] to rank
global associations represented as a Conditional Random Field (CRF). Regardless of this, it has
two major drawbacks. Firstly, it depends on a set of artificially generated false association samples
that can make the training data particularly imbalanced and the training procedure too expensive
1
for large-scale tracking problems. Secondly, RankBoost desires the ranking feature to be positively
correlated with the final ranking (i.e. the association score) [10]. This in turn requires careful pre-adjustment of the sign of each feature based on some prior knowledge [18]. In practice, however, this prior knowledge may not always be available or reliable.
The contribution of this paper is two-fold. We first present an extended formulation of the object
association models proposed in the literature. This generalization improves the expressiveness of the
model, but also increases the number of parameters. We hence, secondly, propose to use structured
learning to automatically learn optimum parameters from a training set, and hence profit fully from
this richer description. Our method addresses the limitations of aforementioned learning approaches
in a principled way.
The rest of the paper is organized as follows. In section 2, we present the extended object association
models and a structured learning approach for global affinity learning. In section 3, an evaluation
shows that our framework inherits the runtime advantage of object association while addressing
many of its limitations. Finally, section 4 states our conclusions and discusses future work.
2 Structured Learning for Cell Tracking

2.1 Association Hypotheses and Scoring
We assume that a previous detection and segmentation step has identified object candidates in all
frames, see Fig. 1. We set out to find that set of object associations that best explains these observations. To this end, we admit the following set E of standard events [21, 13]: a cell can move
or divide and it can appear or disappear. In addition, we allow two cells to (seemingly) merge, to
account for occlusion or undersegmentation; and a cell can (seemingly) split, to allow for the lifting
of occlusion or oversegmentation. These additional hypotheses are useful to account for the errors
that typically occur in the detection and segmentation step in crowded or noisy data. The distinction
between division and split is reasonable given that typical fluorescence stains endow the anaphase
with a distinctive appearance.
Figure 1: Toy example: two sets of object candidates C = {c1, c2, c3} and C′ = {c1′, ..., c5′} in frames t and t+1, and a small subset of the possible association hypotheses (e.g. "c1 moves to c1′", "c2 divides to {c2′, c3′}", "c3 splits to {c4′, c5′}"), each with a feature vector f and a binary indicator value z. One particular interpretation of the scene is indicated by colored arrows (left) or equivalently by a configuration of binary indicator variables z (rightmost column in the table).
Given a pair of object candidate lists x = {C, C′} in two neighboring frames, there is a multitude of possible association hypotheses, see Fig. 1. We have two tasks: firstly, to allow only consistent associations (e.g. making sure that each cell in the second frame is accounted for only once); and secondly to identify, among the multitude of consistent hypotheses, the one that is most compatible with the observations, and with what we have learned from the training data.

We express this compatibility of the association between c ∈ P(C) and c′ ∈ P(C′) by event e ∈ E as an inner product $\langle f^e_{c,c'}, w^e \rangle$. Here, $f^e_{c,c'}$ is a feature vector that characterizes the discrepancy (if any) between object candidates c and c′; and $w^e$ is a parameter vector that encodes everything we
have learned from the training data. Summing over all object candidates in either of the frames and
over all types of events gives the following compatibility function:
$$L(x, z; w) = \sum_{e \in E} \sum_{c \in \mathcal{P}(C)} \sum_{c' \in \mathcal{P}(C')} \langle f^e_{c,c'}, w^e \rangle \, z^e_{c,c'} \qquad (1)$$

$$\text{s.t.} \quad \sum_{e \in E} \sum_{c' \in \mathcal{P}(C')} z^e_{c,c'} = 1 \quad \text{and} \quad \sum_{e \in E} \sum_{c \in \mathcal{P}(C)} z^e_{c,c'} = 1, \quad \text{with } z^e_{c,c'} \in \{0, 1\}. \qquad (2)$$
The constraints in the last line involve binary indicator variables z that reflect the consistency requirements: each candidate in the first frame must have a single fate, and each candidate from the
second frame a unique history. As an important technical detail, note that P(C) := C ∪ (C × C) is a set comprising each object candidate, as well as all ordered pairs of object candidates from a frame (for the example in Fig. 1, P(C) = {c1, c2, c3, {c1, c2}, {c1, c3}, {c2, c3}}). This allows us to conveniently subsume cell divisions, splits and mergers in the above
equation. Overall, the compatibility function L(x, z; w), i.e. the global affinity measure, states how
well a set of associations z matches the observations f (x) computed from the raw data x, given the
knowledge w from the training set.
The remaining tasks, discussed next, are how to learn the parameters w from the training data
(section 2.2); given these, how to find the best possible associations z (section 2.3); and finding
useful features (section 2.4).
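The augmented candidate set P(C) is easy to enumerate; below is a minimal Python sketch, following the pair convention of the example above.

```python
from itertools import combinations

def augmented_candidates(C):
    """P(C): every single candidate plus every candidate pair, so that
    divisions, splits and mergers use the same association variables."""
    return [(c,) for c in C] + list(combinations(C, 2))

# augmented_candidates(['c1', 'c2', 'c3']) yields the three singletons plus
# the three pairs ('c1','c2'), ('c1','c3'), ('c2','c3').
```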
2.2 Structured Max-Margin Parameter Learning
In learning the parameters automatically from a training set, we pursue two goals: first, to go beyond
manual parameter tweaking in obtaining the best possible performance; and second, to make the
process as facile as possible for the user. This is under the assumption that most experimentalists
find it easier to specify what a correct tracking should look like, rather than what value a more-or-less
obscure parameter should have.
Given N training frame pairs X = {x_n} and their correct associations Z* = {z*_n}, n = 1, ..., N, the best set of parameters is the optimizer of

$$\arg\min_w \; R(w; X, Z^*) + \lambda \Omega(w). \qquad (3)$$

Here, R(w; X, Z*) measures the empirical loss of the current parametrization w given the training data X, Z*. To prevent overfitting to the training data, this is complemented by the regularizer Ω(w) that favors parsimonious models. We use L1 or L2 regularization ($\Omega(w) = \|w\|_p^p / p$, $p \in \{1, 2\}$), i.e. a measure of the length of the parameter vector w. The latter is often used for its numerical efficiency, while the former is popular thanks to its potential to induce sparse solutions (i.e., some parameters can become zero). The empirical loss is given by $R(w; X, Z^*) = \frac{1}{N} \sum_{n=1}^{N} \Delta(z^*_n, \hat{z}_n(w; x_n))$. Here $\Delta(z^*, \hat{z})$ is a loss function that measures the discrepancy between a true association z* and a prediction by specifying the fraction of missed events w.r.t. the ground truth:

$$\Delta(z^*, \hat{z}) = \frac{1}{|z^*|} \sum_{e \in E} \sum_{c \in \mathcal{P}(C)} \sum_{c' \in \mathcal{P}(C')} z^{*e}_{c,c'} \left(1 - \hat{z}^e_{c,c'}\right). \qquad (4)$$
This decomposable function allows for exact inference when solving Eq. 5 [6].
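For illustration, the loss of Eq. (4) reduces to a few lines if associations are represented as sets of (event, c, c′) triples whose indicator equals 1 — our representation, not the paper's.

```python
def delta_loss(z_true, z_pred):
    """Eq. (4): fraction of ground-truth events missed by the prediction."""
    missed = sum(1 for event in z_true if event not in z_pred)
    return missed / len(z_true)

# Example: one of two true events missed -> loss 0.5.
assert delta_loss({('mov', 'c1', "c1'"), ('div', 'c2', "c2'c3'")},
                  {('mov', 'c1', "c1'")}) == 0.5
```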
Importantly, both the input (objects from a frame pair) and output (associations between objects)
in this learning problem are structured. We hence resort to max-margin structured learning [2] to
exploit the structure and dependency within the association hypotheses. In comparison to other
aforementioned learning methods, structured learning allows us to directly learn the global affinity
measure, avoid generating many artificial false association samples, and drop any assumptions on
the signs of the features. Structured learning has been successfully applied to many complex real
world problems such as word/sequence alignment [22, 24], graph matching [6], static analysis of
binary executables [14] and segmentation [3].
In particular, we attempt to find the decision boundary that maximizes the margin between the
correct association $z^*_n$ and the closest runner-up solution. An equivalent formulation is the condition
that the score of $z^*_n$ be greater than that of any other solution. To allow for regularization, one can
relax this constraint by introducing slack variables ?n , which finally yields the following objective
function for the max-margin structured learning problem from Eq. 3:
$$\begin{aligned} \arg\min_{w,\,\xi \geq 0} \quad & \frac{1}{N} \sum_{n=1}^{N} \xi_n + \lambda \Omega(w) \\ \text{s.t.} \quad & \forall n, \forall \hat{z}_n \in \mathcal{Z}_n : \; L(x_n, z^*_n; w) - L(x_n, \hat{z}_n; w) \geq \Delta(z^*_n, \hat{z}_n) - \xi_n, \end{aligned} \qquad (5)$$
where $\mathcal{Z}_n$ is the set of possible consistent associations and $\Delta(z^*_n, \hat{z}_n) - \xi_n$ is known as "margin-rescaling" [24]. Intuitively, it pushes the decision boundary further away from the "bad" solutions with high losses.
2.3 Inference and Implementation
Since Eq. 5 involves an exponential number of constraints, the learning problem cannot be represented explicitly, let alone solved directly. We thus resort to the bundle method [23] which, in turn, is based on the cutting-planes approach [24]. The basic idea is as follows: Start with some parametrization w and no constraints. Iteratively find, first, the optimum associations for the current w by solving, for all n, $\hat{z}_n = \arg\max_z \{L(x_n, z; w) + \Delta(z^*_n, z)\}$. Use all these $\hat{z}_n$ to identify the most violated constraint, and add it to Eq. 5. Update w by solving Eq. 5 (with added constraints), then find new best associations, etc. pp. For a given parametrization, the optimum associations can be found by integer linear programming (ILP) [16, 21, 13].

Our framework has been implemented in Matlab and C++, including a labeling GUI for the generation of training set associations, feature extraction, model inference and the bundle method. To reduce the search space and eliminate hypotheses with no prospect of being realized, we constrain the hypotheses to a k-nearest neighborhood with distance thresholding. We use IBM CPLEX² as the underlying optimization platform for the ILP, quadratic programming and linear programming as needed for solving Eq. 5 [23].

² http://www-01.ibm.com/software/integration/optimization/cplex-optimizer/
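A toy version of the association ILP of Eqs. (1)–(2) can be written with an off-the-shelf solver interface such as PuLP (the paper itself uses CPLEX). Here `hypotheses` is assumed to be a list of (score, source_candidates, target_candidates) tuples, and `candidates_t`/`candidates_t1` the candidate sets of the two frames.

```python
import pulp

prob = pulp.LpProblem("associations", pulp.LpMaximize)
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(len(hypotheses))]

# Objective: total compatibility of the selected hypotheses, cf. Eq. (1).
prob += pulp.lpSum(score * z[i] for i, (score, _, _) in enumerate(hypotheses))

# Consistency, cf. Eq. (2): one fate per candidate in frame t ...
for c in candidates_t:
    prob += pulp.lpSum(z[i] for i, (_, src, _) in enumerate(hypotheses) if c in src) == 1
# ... and one history per candidate in frame t+1.
for c in candidates_t1:
    prob += pulp.lpSum(z[i] for i, (_, _, tgt) in enumerate(hypotheses) if c in tgt) == 1

prob.solve()
```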
2.4 Features
To differentiate similar events (e.g. division and split) and resolve ambiguity in model inference, we need rich features to characterize different events. In addition to basic features such as size/position [21] and intensity histogram [16], we also designed new features such as "shape compactness" for oversegmentation and "angle pattern" for division. Shape compactness relates the summed areas of two object candidates to the area of their union's convex hull. Angle pattern describes the constellation of two daughter cells relative to their mother. Features can be defined on a pair of object candidates or on an individual object candidate only. Our features are categorized in Table 1. Note that the same feature can be used for different events.
Table 1: Categorization of features.

Category   Feature Description
Position   difference in position, distance to border, overlap with border;
Intensity  difference in intensity histogram/sum/mean/deviation, intensity of father cell;
Shape      difference in shape, difference in size, shape compactness, shape evenness;
Others     division angle pattern, mass evenness, eccentricity of father cell.
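As an example of one of the new features, here is a plausible Python reading of "shape compactness" — the summed candidate areas relative to the area of the convex hull of their union; the exact definition in the authors' implementation may differ. Values close to 1 hint at an oversegmented object.

```python
import numpy as np
from scipy.spatial import ConvexHull

def shape_compactness(points_a, points_b):
    """2-D pixel coordinates of two candidates in, a ratio (typically in (0, 1]) out.
    Note: ConvexHull(...).volume is the enclosed area for 2-D inputs."""
    area_a = ConvexHull(points_a).volume
    area_b = ConvexHull(points_b).volume
    hull_union = ConvexHull(np.vstack([points_a, points_b])).volume
    return (area_a + area_b) / hull_union
```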
3 Results
We evaluated the proposed method on two publicly available image sequences provided in conjunction with the DCellIQ project³ [16] and the Mitocheck project⁴ [12]. The two datasets show a certain degree of variation such as illumination, cell density and image compression artifacts (Fig. 2).

³ http://www.cbi-tmhs.org/Dcelliq/files/051606 HeLaMCF10A DMSO 1.rar
⁴ http://www.mitocheck.org/cgi-bin/mtc?action=show movie;query=243867
The GFP-stained cell nuclei were segmented using the method in [19], yielding an F-measure over 99.3%
by counting. Full ground truth associations for training and evaluation were generated with a Matlab GUI tool at a rate of approximately 20 frames/hour. Some statistics about these two datasets are
shown in Table 2.
Table 2: Some statistics about the datasets in our evaluation.

Name       Image Size   No. of Frames  No. of Cells  Segm. F-Measure
DCellIQ    512 × 672    100            10664         99.5%
Mitocheck  1024 × 1344  94             24096         99.3%
Figure 2: Selected raw images from the DCellIQ sequence (top) and the Mitocheck sequence (bottom) at T = 25, 50 and 75. The Mitocheck sequence exhibits higher cell density, larger intensity variability and "blockness" artifacts due to image compression.
Task 1: Efficient Tracking for a Given Sequence
We first evaluate our method on a task that is frequently encountered in practice: the user simply
wishes to obtain a good tracking for a given sequence with the smallest possible effort. For a fair
comparison, we extended Padfield?s method [21] to account for the six events described in section
2.1 and used the same features (viz., size and position) and weights as in [21]. Hand-tuning of the
parameters results in a high accuracy of 98.4% (i.e. 1 - total loss) as shown in Table 3 (2nd row).
A detailed analysis of the error counts for specific events shows that the method accounts well for
moves, but has difficulty with disappearance and split events. This is mainly due to the limited
descriptive power of the simple features used. To study the difference between manual tweaking
and learning of the parameters, we used the learning framework presented here to optimize the
model and obtained a reduction of the total loss from 1.64% to 0.65% (3rd row). This can be
considered as the limit of this model. Note that the learned parametrization actually deteriorates the
detection of divisions because the learning aims at minimizing the overall loss across all events. In
obtaining these results, one third of the entire sequence was used for training, just as in all subsequent
comparisons.
With 37 features included and their weights optimized using structured learning, our model fully
profits from this richer description and achieves a total loss of only 0.30% (4th row) which is a
significant improvement over [21, 16] (2nd/7th rows) and manual tweaking (6th row). Though a certain amount of effort is needed for creating the training set, our method allows experimentalists to contribute their expertise in an intuitive fashion. Some example associations are shown in Fig. 3.
The learned parameters are summarized in Fig. 4 (top). They afford the following observations:
Firstly, features on cell size and shape are generally of high importance, which is in line with the
assumption in [21]. Secondly, the correlations of the features with the final association score are
Table 3: Performance comparison on the DCellIQ dataset. The header row shows the number of events occurring for moves, divisions, appearance, disappearance, splits and mergers. The remaining entries give the error counts for each event, summed over the entire sequence.

                                  mov    div  app  dis  spl  mer  total loss
No. of events                     10156  104  78   76   54   55
Padfield et al. [21]              71     18   16   26   30   12   1.64%
Padfield et al. w/ learning       21     25   5    5    6    10   0.65%
Ours w/ learning (L2 regula.)     15     6    4    1    2    6    0.30%
Ours w/ learning (L1 regula.)     22     6    9    3    4    9    0.45%
Ours w/ manual tweaking           56     24   16   19   2    5    1.12%
Li et al. [16]                                                    6.18%*
Local learning by Random Forest   18     14   2    0    12   13   0.55%

* Here we use the best reported error matching rate in [16] (similar to our loss).
Figure 3: Some diverging associations by [21] (top) and our method (bottom). Color code: yellow = move; red = division; green = split; cyan = merger.
automatically learned. For example, shape compactness is positively correlated with split but negatively with division. This is in line with the intuition that an oversegmentation conserves compact
shape, while a true division seemingly pushes the daughters far away from each other (in the present
kind of data, where only DNA is labeled). Finally, in spite of the regularization, many features are
associated with large parameter values, which is key to the improved expressive power.
Task 2: Tracking for High-Throughput Experiments
The experiment described in the foregoing draws both training and test samples from the same time
lapse experiment. However, in high-throughput experiments such as in the Mitocheck project [12],
it is more desirable to train on one or a few sequences, and make predictions on many others. To
emulate this situation, we have used the parameters w trained in the foregoing on the DCellIQ
sequence [16] and used these to estimate the tracking of the Mitocheck dataset. The main focus of
the Mitocheck project is on accurate detection of mitosis (cell division). Despite the difference in
illumination and cell density from the training data, and despite the segmentation artifacts caused
by the compression of the image sequence, our method shows a high generalization capability and
obtains a total loss of 0.78%. In particular, we extract 93.2% of 384 mitosis events which is a
significant improvement over the mitosis detection rate reported in [12] (81.5%, 294 events).
Comparison to Local Affinity Learning
We also developed a local affinity learning approach that is in the spirit of [1, 15]. Rather than using
AdaBoost [9], we chose Random Forest (RF) [4] which provides fairly comparable classification
power [7]. We sample positive associations from the ground truth and randomly generate false
associations. RF classifiers are built for each event independently. The predicted probabilities by
the RF classifiers are used to compute the overall association score as in Eq. 6 (with the same
constraints in Eq. 2).
Figure 4: Parameters w learned from the training data with L2 (top) or L1 (bottom) regularization. Parameters weighing the features for different events (mov, div, app, dis, spl, mer) are colored differently; the horizontal axis enumerates the individual features of Table 1. Both parameter vectors are normalized to unit 1-norm, i.e. ‖w‖₁ = 1.
Table 4: Performance comparison on the Mitocheck dataset. The method was trained on the DCellIQ dataset. The header row shows the number of events occurring for moves, divisions, appearance, disappearance, splits and mergers. The remaining entries give the error counts for each event, summed over the entire sequence.

                                  mov    div  app  dis  spl  mer  total loss
No. of events                     22520  384  310  304  127  132
Padfield et al. w/ learning       171    85   58   47   53   13   1.39%
Ours w/ learning (L2 regula.)     98     26   31   25   43   9    0.78%
Ours w/ learning (L1 regula.)     93     35   54   25   26   48   0.98%
Local learning by Random Forest   214    281  162  10   82   68   2.33%
Since we have multiple competing events (one cell can only have a single fate), we also introduce weights {α_e} to capture the dependencies between events. These weights are optimized via a grid search on the training data.
$$L(x, z; w) = \sum_{e \in E} \sum_{c \in \mathcal{P}(C)} \sum_{c' \in \mathcal{P}(C')} \alpha_e \, \mathrm{Prob}(f^e_{c,c'}) \, z^e_{c,c'} \qquad (6)$$
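A sketch of this baseline with scikit-learn is shown below; `X_event`/`y_event` denote the assumed per-event training features and 0/1 labels (1 for a true association), and `f` is a NumPy feature vector for a single candidate pair.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One forest per event type, trained on ground-truth positives plus
# randomly generated false associations.
forests = {e: RandomForestClassifier(n_estimators=100).fit(X_event[e], y_event[e])
           for e in ("mov", "div", "app", "dis", "spl", "mer")}

def affinity(event, f):
    """Prob(f^e_{c,c'}) in Eq. (6): predicted probability of a true association."""
    return forests[event].predict_proba(np.asarray(f).reshape(1, -1))[0, 1]
```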
The results are shown in Table 3 (8th row) and Table 4 (5th row), which afford the following observations. Firstly, a locally strong affinity prediction does not necessarily guarantee a better global
association. Secondly, local learning shows particularly weak generalization capability.
Sensitivity to Training Set
The success of supervised learning depends on the representativeness (and hence also size) of the
training set. To test the sensitivity of the results to the training data used, we drew different numbers
of training image pairs randomly from the entire sequence and used the remaining pairs for testing.
For each training set size, this experiment is repeated 10 times. The mean and deviation of the losses
on the respective test sets is shown in Fig. 5. According to the one-standard-error-rule, associations
between at least 15 or 20 image pairs are desirable, which can be accomplished in well below an
hour of annotation work.
The results of L1 vs. L2 regularization are comparable (see Table 3 and Table 4). While L1 regularization yields sparser feature selection (Fig. 4, bottom), it has a much slower convergence rate (Fig. 6). The staircase structure shows that, due to sparse feature selection, the bundle method has to find more constraints to escape from a local minimum.

Figure 5: Learning curve of structured learning (with L2 regularization). [Average test loss (× 10⁻²) vs. number of frame pairs for training.]

Figure 6: Convergence rates of structured learning (L1 vs. L2 regularization). [Approximation gap (× 10⁻³) vs. number of constraints.]
4 Conclusion & Future Work
We present a new cell tracking scheme that uses more expressive features and comes with a structured learning framework to train the larger number of parameters involved. Comparison to related
methods shows that this learning scheme brings significant improvements in performance and, in
our opinion, usability.
We currently work on further improvement of the tracking by considering more than two frames at
a time, and on an active learning scheme that should reduce the amount of required training inputs.
Acknowledgement

We are very grateful for partial financial support by CellNetworks Cluster (EXC81), FORSYS-ViroQuant (0313923), SBCancer, DFG (GRK 1653) and the "Enable fund" of University of Heidelberg. We also thank Bjoern Andres, Jing Yuan and Christoph Straehle for their comments on the manuscript.
References
[1] S. Avidan. Ensemble Tracking. In CVPR, 2005.
[2] G. Bakir, T. Hofmann, B. Schoelkopf, A. J. Smola, B. Taskar, and S. Vishwanathan. Predicting
Structured Data. MIT Press, Cambridge, MA, 2006.
[3] L. Bertelli, T. Yu, D. Vu, and B. Gokturk. Kernelized Structural SVM Learning for Supervised
Object Segmentation. In CVPR, 2011.
[4] L. Breiman. Random Forests. Mach Learn, 45(1):5?32, 2001.
[5] M. D. Breitenstein, F. Reichlin, B. Leibe, E. Koller-Meier, and L. V. Gool. Robust Trackingby-Detection using a Detector Confidence Particle Filter. In ICCV, 2009.
[6] T. S. Caetano, J. J. McAuley, L. Cheng, Q. V. Le, and A. J. Smola. Learning Graph Matching.
IEEE T Pattern Anal, 31(6):1048?1058, 2009.
[7] R. Caruana and A. Niculescu-Mizil. An Empirical Comparison of Supervised Learning Algorithms. In ICML, pages 161?168, 2006.
8
[8] O. Dzyubachyk, W. A. van Cappellen, J. Essers, et al. Advanced Level-Set-Based Cell Tracking in Time-Lapse Fluorescence Microscopy. IEEE T Med Imag, 29(3):852, 2010.
[9] Y. Freund. An adaptive version of the boost by majority algorithm. Mach Learn, 43(3):293?
318, 2001.
[10] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An Efficient Boosting Algorithm for Combining Preferences. J Mach Learn Res, 4:933–969, 2003.
[11] H. Grabner and H. Bischof. On-line Boosting and Vision. In CVPR, 2006.
[12] M. Held, M. H. A. Schmitz, et al. CellCognition: time-resolved phenotype annotation in highthroughput live cell imaging. Nature Methods, 7(9):747?754, 2010.
[13] T. Kanade, Z. Yin, R. Bise, S. Huh, S. E. Eom, M. Sandbothe, and M. Chen. Cell Image
Analysis: Algorithms, System and Applications. In WACV, 2011.
[14] N. Karampatziakis. Static Analysis of Binary Executables Using Structural SVMs. In NIPS,
2010.
[15] C.-H. Kuo, C. Huang, and R. Nevatia. Multi-Target Tracking by On-Line Learned Discriminative Appearance Models. In CVPR, 2010.
[16] F. Li, X. Zhou, J. Ma, and S. Wong. Multiple Nuclei Tracking Using Integer Programming for
Quantitative Cancer Cell Cycle Analysis. IEEE T Med Imag, 29(1):96, 2010.
[17] K. Li, E. D. Miller, M. Chen, et al. Cell population tracking and lineage construction with
spatiotemporal context. Med Image Anal, 12(5):546?566, 2008.
[18] Y. Li, C. Huang, and R. Nevatia. Learning to Associate: HybridBoosted Multi-Target Tracker
for Crowded Scene. CVPR, 2009.
[19] X. Lou, F. O. Kaster, M. S. Lindner, et al. DELTR: Digital Embryo Lineage Tree Reconstructor.
In ISBI, 2011.
[20] E. Meijering, O. Dzyubachyk, I. Smal, and W. A. van Cappellen. Tracking in cell and developmental biology. Semin Cell Dev Biol, 20(8):894 ? 902, 2009.
[21] D. Padfield, J. Rittscher, and B. Roysam. Coupled Minimum-Cost Flow Cell Tracking for
High-Throughput Quantitative Analysis. Med Image Anal, 2010.
[22] B. Taskar, S. Lacoste-Julien, and M. I. Jordan. Structured Prediction, Dual Extragradient and
Bregman Projections. J Mach Learn Res, 7:1627?1653, 2006.
[23] C. H. Teo, S. V. N. Vishwanthan, A. J. Smola, and Q. V. Le. Bundle methods for regularized
risk minimization. J Mach Learn Res, 11:311?365, 2010.
[24] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large Margin Methods for Structured and Interdependent Output Variables. J Mach Learn Res, 6(2):1453, 2006.
[25] X. Wang, G. Hua, and T. X. Han. Discriminative Tracking by Metric Learning. In ECCV,
2010.
[26] B. Yang, C. Huang, and R. Nevatia. Learning Affinities and Dependencies for Multi-Target
Tracking using a CRF Model. In CVPR, 2011.
[27] B. Zhong, H. Yao, S. Chen, et al. Visual Tracking via Weakly Supervised Learning from
Multiple Imperfect Oracles. In CVPR, 2010.
Action-Gap Phenomenon in Reinforcement Learning

Amir-massoud Farahmand*
School of Computer Science, McGill University
Montreal, Quebec, Canada
Abstract

Many practitioners of reinforcement learning problems have observed that oftentimes the performance of the agent reaches very close to the optimal performance even though the estimated (action-)value function is still far from the optimal one. The goal of this paper is to explain and formalize this phenomenon by introducing the concept of the action-gap regularity. As a typical result, we prove that for an agent following the greedy policy $\hat{\pi}$ with respect to an action-value function $\hat{Q}$, the performance loss $\mathbb{E}\left[V^*(X) - V^{\hat{\pi}}(X)\right]$ is upper bounded by $O(\|\hat{Q} - Q^*\|_\infty^{1+\zeta})$, in which $\zeta \geq 0$ is the parameter quantifying the action-gap regularity. For $\zeta > 0$, our results indicate smaller performance loss compared to what previous analyses had suggested. Finally, we show how this regularity affects the performance of the family of approximate value iteration algorithms.
1 Introduction
This paper introduces a new type of regularity in the reinforcement learning (RL) and planning
problems with finite-action spaces that suggests that the convergence rate of the performance loss to
zero is faster than what previous analyses had indicated. The effect of this regularity, which we call
the action-gap regularity, is that oftentimes the performance of the RL agent reaches very close to
the optimal performance (e.g., it always solves the mountain-car problem with the optimal number
of steps) even though the estimated action-value function is still far from the optimal one.
Figure 1 illustrates the effect of this regularity in a simple problem. We use value iteration to solve a stochastic 1D chain walk problem (slight modification of the example in Section 9.1 of [1]). The behavior of the supremum of the difference between the estimate after $k$ iterations and the optimal action-value function is $O(\gamma^k)$, in which $0 \leq \gamma < 1$ is the discount factor (notations shall be introduced in Section 2). The current theoretical results suggest that the convergence of the performance loss, which is defined as the average difference between the value of the optimal policy and the value of the greedy policy w.r.t. (with respect to) the estimated action-value function, should have the same $O(\gamma^k)$ behavior (cf. Proposition 6.1 of Bertsekas and Tsitsiklis [2]). However, the behavior of the performance loss is often considerably faster, e.g., it is approximately $O(\gamma^{1.85k})$ in this example.
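As an illustration of this kind of experiment, the following sketch (our illustration, not the author's code; the chain construction, slip probability, and reward placement are assumptions) runs value iteration on a small stochastic chain and prints both the sup-norm estimation error and the performance loss of the greedy policy:

```python
# Minimal sketch (ours): value iteration on a stochastic 1D chain walk,
# tracking the sup-norm error of the action-value estimate and the
# performance loss of its greedy policy.
import numpy as np

n_states, gamma = 50, 0.95

# Two actions (left/right) with slip probability 0.1; reward 1 at the right end.
P = np.zeros((2, n_states, n_states))
for s in range(n_states):
    left, right = max(s - 1, 0), min(s + 1, n_states - 1)
    P[0, s, left], P[0, s, right] = 0.9, 0.1   # action 1: go left
    P[1, s, right], P[1, s, left] = 0.9, 0.1   # action 2: go right
r = np.zeros((2, n_states))
r[:, n_states - 1] = 1.0

def bellman_opt(Q):
    V = Q.max(axis=0)
    return r + gamma * P @ V

def value_of_policy(pi):
    # Solve (I - gamma * P_pi) V = r_pi exactly for a deterministic policy.
    P_pi = P[pi, np.arange(n_states)]
    r_pi = r[pi, np.arange(n_states)]
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)

# Ground truth via many Bellman iterations.
Q_star = np.zeros((2, n_states))
for _ in range(2000):
    Q_star = bellman_opt(Q_star)
V_star = Q_star.max(axis=0)

Q = np.zeros((2, n_states))
for k in range(1, 61):
    Q = bellman_opt(Q)
    est_err = np.abs(Q - Q_star).max()                          # ||Q_k - Q*||_inf
    loss = (V_star - value_of_policy(Q.argmax(axis=0))).mean()  # performance loss
    if k % 10 == 0:
        print(f"k={k:3d}  estimation error={est_err:.2e}  loss={loss:.2e}")
```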
To gain a better understanding of the action-gap regularity, focus on a single state and suppose that
there are only two actions available. When the estimated action-value function has a large error, the
greedy policy w.r.t. it can possibly choose the suboptimal action. However, when the error becomes
smaller than the (half of the) gap between the value of the optimal action and the other one, the
selected greedy action is the optimal action. After passing this threshold, the size of the error in
the estimate of the action-value function in that state does not have any effect on the quality of the
selected action. The larger the gap is, the more inaccurate the estimate can be while the selected
greedy action is the optimal one. On the other hand, if the estimated action-value function does not
suggest a correct ordering of actions but the gap is negligibly small, the performance loss of not
choosing the optimal action is small as well. The presence of this gap in the optimal action-value function is what we call the action-gap regularity of the problem and the described behavior is called the action-gap phenomenon.

* www.SoloGen.net

[Figure 1 appears here: a semi-log plot of error/loss versus the iteration number k, comparing the $L_\infty$ error of the estimated action-value function, which decreases as $O(\gamma^k)$, with the performance loss, which decreases as approximately $O(\gamma^{1.85k})$.]

Figure 1: Comparison of the action-value estimation error $\|\hat{Q} - Q^*\|_\infty$ and the performance loss $\|V^* - V^{\hat{\pi}}\|_1$ ($\hat{\pi}$ is the greedy policy with respect to $\hat{Q}$) at different iterations of the value iteration algorithm. The rate of decrease for the performance loss is considerably faster than that of the estimation error. The problem is a 1D stochastic chain walk with 500 states and $\gamma = 0.95$.
Action-gap regularity is similar to the low-noise (or margin) condition in the classification literature. The low-noise condition is the assumption that the conditional probability of the class label given input is "far" from the critical decision point. If this condition holds, "fast" convergence rate is obtainable as was shown by Mammen and Tsybakov [3], Tsybakov [4], Audibert and Tsybakov [5]. The low-noise condition is believed to be one reason that many high-dimensional classification problems can be solved with efficient sample complexity (cf. Rinaldo and Wasserman [6]). We borrow techniques developed in the classification literature, in particular by Audibert and Tsybakov [5], in our analysis.
It is notable that there have been some works that used classification algorithms to solve reinforcement learning (e.g., Lagoudakis and Parr [7], Lazaric et al. [8]) or the related problem of apprenticeship learning (e.g., Syed and Schapire [9]). Nevertheless, the connection of this work to the
classification literature is only by borrowing theoretical ideas from that literature and not in using
any particular algorithm. The focus of this work is indeed on the value-based approaches, though
one might expect that similar behavior can be observed in classification-based approaches as well.
In the rest of this paper, we formalize the action-gap phenomenon and prove that whenever the MDP has a favorable action-gap regularity, fast convergence rate is achievable. Theorem 1 upper bounds the performance loss of the greedy policy w.r.t. the estimated action-value function by a function of the $L_p$-norm of the difference between the estimated action-value function and the optimal one. Our result complements previous theoretical analyses of RL/Planning problems such as those by Antos et al. [10], Munos and Szepesvári [11], Farahmand et al. [12, 13], Maillard et al. [14], who mainly focused on the quality of the (action-)value function estimate and ignored the action-gap regularity. This synergy provides a clearer picture of what makes an RL/Planning problem easy or difficult. Finally, as an example of Theorem 1, we address the question of how the errors caused at each iteration of the Approximate Value Iteration (AVI) algorithm affect the quality of the outcome policy and show that the AVI procedure benefits from the action-gap regularity of the problem (Theorem 2).
2 Notations

In this section, we provide a brief summary of some of the concepts and definitions from the theory of MDPs and RL. For more information, the reader is referred to Bertsekas and Tsitsiklis [2], Sutton and Barto [15], Szepesvári [16].
For a space $\Omega$ with $\sigma$-algebra $\sigma_\Omega$, we define $\mathcal{M}(\Omega)$ as the set of all probability measures over $\sigma_\Omega$. $B(\Omega)$ denotes the space of bounded measurable functions w.r.t. (with respect to) $\sigma_\Omega$ and $B(\Omega, L)$ denotes the subset of $B(\Omega)$ with bound $0 < L < \infty$.

A finite-action discounted MDP is a 5-tuple $(\mathcal{X}, \mathcal{A}, P, \mathcal{R}, \gamma)$, where $\mathcal{X}$ is a measurable state space, $\mathcal{A}$ is a finite set of actions, $P : \mathcal{X} \times \mathcal{A} \to \mathcal{M}(\mathcal{X})$ is the transition probability kernel, $\mathcal{R} : \mathcal{X} \times \mathcal{A} \to \mathcal{M}(\mathbb{R})$ is the reward distribution, and $0 \leq \gamma < 1$ is a discount factor. We denote $r(x,a) = \mathbb{E}[\mathcal{R}(\cdot|x,a)]$.

A measurable mapping $\pi : \mathcal{X} \to \mathcal{A}$ is called a deterministic Markov stationary policy, or just a policy in short. An agent's following a policy $\pi$ in an MDP means that at each time step $A_t = \pi(X_t)$. A policy $\pi$ induces two transition probability kernels $P^\pi : \mathcal{X} \to \mathcal{M}(\mathcal{X})$ and $P^\pi : \mathcal{X} \times \mathcal{A} \to \mathcal{M}(\mathcal{X} \times \mathcal{A})$. For a measurable subset $A$ of $\mathcal{X}$ and a measurable subset $B$ of $\mathcal{X} \times \mathcal{A}$, we define $(P^\pi)(A|x) \triangleq \int P(dy|x, \pi(x)) \mathbb{I}\{y \in A\}$ and $(P^\pi)(B|x,a) \triangleq \int P(dy|x,a) \mathbb{I}\{(y, \pi(y)) \in B\}$. The $m$-step transition probability kernels $(P^\pi)^m : \mathcal{X} \times \mathcal{A} \to \mathcal{M}(\mathcal{X} \times \mathcal{A})$ for $m = 2, 3, \cdots$ are inductively defined as $(P^\pi)^m(B|x,a) \triangleq \int_{\mathcal{X}} P(dy|x,a) (P^\pi)^{m-1}(B|y, \pi(y))$ (similarly for $(P^\pi)^m : \mathcal{X} \to \mathcal{M}(\mathcal{X})$).

Given a transition probability kernel $P : \mathcal{X} \to \mathcal{M}(\mathcal{X})$, define the right-linear operator $P\cdot : B(\mathcal{X}) \to B(\mathcal{X})$ by $(PV)(x) \triangleq \int_{\mathcal{X}} P(dy|x) V(y)$. For a probability measure $\rho \in \mathcal{M}(\mathcal{X})$ and a measurable subset $A$ of $\mathcal{X}$, define the left-linear operator $\rho P : \mathcal{M}(\mathcal{X}) \to \mathcal{M}(\mathcal{X})$ by $(\rho P)(A) = \int \rho(dx) P(dy|x) \mathbb{I}\{y \in A\}$. A typical choice of $P$ is $(P^\pi)^m : \mathcal{M}(\mathcal{X}) \to \mathcal{M}(\mathcal{X})$. These operators for $P : \mathcal{X} \times \mathcal{A} \to \mathcal{M}(\mathcal{X} \times \mathcal{A})$ are defined similarly.

The value function $V^\pi$ and the action-value function $Q^\pi$ of a policy $\pi$ are defined as follows: Let $(R_t; t \geq 1)$ be the sequence of rewards when the Markov chain is started from state $X_1$ (state-action $(X_1, A_1)$ for the action-value function) drawn from a positive probability distribution over $\mathcal{X}$ ($\mathcal{X} \times \mathcal{A}$) and the agent follows the policy $\pi$. Then $V^\pi(x) \triangleq \mathbb{E}\left[\sum_{t=1}^{\infty} \gamma^{t-1} R_t \,\big|\, X_1 = x\right]$ and $Q^\pi(x,a) \triangleq \mathbb{E}\left[\sum_{t=1}^{\infty} \gamma^{t-1} R_t \,\big|\, X_1 = x, A_1 = a\right]$.

For a discounted MDP, we define the optimal value and optimal action-value functions by $V^*(x) = \sup_\pi V^\pi(x)$ for all states $x \in \mathcal{X}$ and $Q^*(x,a) = \sup_\pi Q^\pi(x,a)$ for all state-actions $(x,a) \in \mathcal{X} \times \mathcal{A}$. We say that a policy $\pi^*$ is optimal if it achieves the best values in every state, i.e., if $V^{\pi^*} = V^*$. We say that a policy $\pi$ is greedy w.r.t. an action-value function $Q$ and write $\pi = \hat{\pi}(\cdot; Q)$, if $\pi(x) = \operatorname{argmax}_{a \in \mathcal{A}} Q(x,a)$ holds for all $x \in \mathcal{X}$ (if there exist multiple maximizers, a maximizer is chosen in an arbitrary deterministic manner). Greedy policies are important because a greedy policy w.r.t. the optimal action-value function $Q^*$ is an optimal policy.

For a fixed policy $\pi$, the Bellman operators $T^\pi : B(\mathcal{X}) \to B(\mathcal{X})$ and $T^\pi : B(\mathcal{X} \times \mathcal{A}) \to B(\mathcal{X} \times \mathcal{A})$ (for the action-value functions) are defined as $(T^\pi V)(x) \triangleq r(x, \pi(x)) + \gamma \int_{\mathcal{X}} V(y) P(dy|x, \pi(x))$ and $(T^\pi Q)(x,a) \triangleq r(x,a) + \gamma \int_{\mathcal{X}} Q(y, \pi(y)) P(dy|x,a)$. The fixed point of the Bellman operator is the (action-)value function of the policy $\pi$, i.e., $T^\pi Q^\pi = Q^\pi$ and $T^\pi V^\pi = V^\pi$. Similarly, the Bellman optimality operators $T^* : B(\mathcal{X}) \to B(\mathcal{X})$ and $T^* : B(\mathcal{X} \times \mathcal{A}) \to B(\mathcal{X} \times \mathcal{A})$ (for the action-value functions) are defined as $(T^* V)(x) \triangleq \max_a \left\{ r(x,a) + \gamma \int_{\mathbb{R} \times \mathcal{X}} V(y) P(dr, dy|x,a) \right\}$ and $(T^* Q)(x,a) \triangleq r(x,a) + \gamma \int_{\mathbb{R} \times \mathcal{X}} \max_{a'} Q(y, a') P(dr, dy|x,a)$. Again, these operators enjoy a fixed-point property similar to that of the Bellman operators: $T^* Q^* = Q^*$ and $T^* V^* = V^*$.

For a probability measure $\nu \in \mathcal{M}(\mathcal{X})$ and a measurable function $V \in B(\mathcal{X})$, we define the $L_p(\nu)$-norm ($1 \leq p < \infty$) of $V$ as $\|V\|_{p,\nu} \triangleq \left[\int_{\mathcal{X}} |V(x)|^p \, d\nu(x)\right]^{1/p}$. The $L_\infty(\mathcal{X})$-norm is defined as $\|V\|_\infty \triangleq \sup_{x \in \mathcal{X}} |V(x)|$. For $\nu \in \mathcal{M}(\mathcal{X} \times \mathcal{A})$ and $Q \in B(\mathcal{X} \times \mathcal{A})$, we define $\|Q\|_{p,\nu}$ ($1 \leq p < \infty$) by $\|Q\|_{p,\nu}^p \triangleq \frac{1}{|\mathcal{A}|} \sum_{a=1}^{|\mathcal{A}|} \|Q(\cdot, a)\|_{p,\nu}^p$ and $\|Q\|_\infty \triangleq \sup_{(x,a) \in \mathcal{X} \times \mathcal{A}} |Q(x,a)|$.
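As a concrete companion to these definitions, the following small sketch (ours; the random MDP is an arbitrary example) builds a finite MDP, applies the Bellman operator $T^\pi$, and checks the fixed-point property numerically:

```python
# Sketch (ours): checking the fixed-point property T^pi Q^pi = Q^pi from this
# section on a random finite MDP; all sizes and the MDP itself are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma = 6, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] is a distribution over y
r = rng.uniform(size=(nS, nA))
pi = rng.integers(nA, size=nS)                  # a deterministic policy

def T_pi(Q):
    """(T^pi Q)(x,a) = r(x,a) + gamma * sum_y P(y|x,a) Q(y, pi(y))."""
    Q_next = Q[np.arange(nS), pi]               # Q(y, pi(y))
    return r + gamma * P @ Q_next

# Q^pi is the fixed point of T^pi; obtain it by iterating the contraction.
Q = np.zeros((nS, nA))
for _ in range(5000):
    Q = T_pi(Q)
print(np.abs(T_pi(Q) - Q).max())  # ~0: Q is (numerically) the fixed point
```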
3 Action-Gap Theorem

In this section, we present the action-gap theorem for an MDP $(\mathcal{X}, \mathcal{A}, P, \mathcal{R}, \gamma)$. To simplify the analysis, we assume that the number of actions $|\mathcal{A}|$ is only 2. We denote $\rho^* \in \mathcal{M}(\mathcal{X})$ as the stationary distribution induced by $\pi^*$, and we let $\rho \in \mathcal{M}(\mathcal{X})$ be a user-specified evaluation distribution. This distribution indicates the relative importance of regions of the state space to the user.

[Figure 2 appears here.]

Figure 2: The action-gap function $g_{Q^*}(x)$ and the relative ordering of the optimal and the estimated action-value functions for a single state $x$. Depending on the ordering of the estimates, the greedy action is the same as or different from the optimal action. This figure does not show all possible configurations.
Suppose that algorithm $\mathcal{A}$ receives a dataset $\mathcal{D}_n = \{(X_1, A_1, R_1, X'_1), \ldots, (X_n, A_n, R_n, X'_n)\}$ (with $R_t$ being drawn from $\mathcal{R}(\cdot|X_t, A_t)$ and $X'_t$ being drawn from $P(\cdot|X_t, A_t)$) and outputs $\hat{Q}$ as an estimate of the optimal action-value function, i.e., $\hat{Q} \leftarrow \mathcal{A}(\mathcal{D}_n)$. The exact nature of this algorithm is not important and it can be any online or offline, batch or incremental algorithm of choice such as Q-learning, SARSA [15], and their variants [17], LSPI [1], LARS-TD [18] in a policy iteration procedure, REG-LSPI [13], various Fitted Q-Iteration algorithms [19, 20, 12], or Linear Programming-based approaches [21, 22]. The only relevant aspect of $\hat{Q}$ is how well it approximates $Q^*$. We quantify the quality of the approximation by the $L_p$-norm $\|\hat{Q} - Q^*\|_{p,\rho^*}$ ($p \in [1, \infty]$).

The performance loss (or regret) of a policy $\pi$ is the expected difference between the value of the optimal policy $\pi^*$ and the value of $\pi$ when the initial state distribution is selected according to $\rho$, i.e.,

$$\mathrm{Loss}(\pi; \rho) \triangleq \int_{\mathcal{X}} \left(V^*(x) - V^\pi(x)\right) d\rho(x). \qquad (1)$$

The value of $\mathrm{Loss}(\hat{\pi}; \rho)$, in which $\hat{\pi}$ is the greedy policy w.r.t. $\hat{Q}$, is the main quantity of interest and indicates how much worse the agent following policy $\hat{\pi}$ would perform, in average, compared to the optimal one. The choice of $\rho$ enables the user to specify the relative importance of regions in the state space.

We define the action(-value)-gap function $g_{Q^*} : \mathcal{X} \to \mathbb{R}$ as

$$g_{Q^*}(x) \triangleq |Q^*(x, 1) - Q^*(x, 2)|.$$

This gap is shown in Figure 2. The following assumption quantifies the action-gap regularity.

Assumption A1 (Action-Gap). For a fixed MDP $(\mathcal{X}, \mathcal{A}, P, \mathcal{R}, \gamma)$ with $|\mathcal{A}| = 2$, there exist constants $c_g > 0$ and $\zeta \geq 0$ such that for all $t > 0$, we have

$$\mathbb{P}_{\rho^*}(0 < g_{Q^*}(X) \leq t) \triangleq \int_{\mathcal{X}} \mathbb{I}\{0 < g_{Q^*}(x) \leq t\} \, d\rho^*(x) \leq c_g t^\zeta.$$

The value of $\zeta$ controls the distribution of the action-gap $g_{Q^*}(X)$. A large value of $\zeta$ indicates that the probability of $Q^*(X, 1)$ being very close to $Q^*(X, 2)$ is small and vice versa. The smallness of this probability implies that the estimated action-value function $\hat{Q}$ might be rather inaccurate in a large subset of the state space (measured according to $\rho^*$) but its corresponding greedy policy would still be the same as the optimal one. The case of $\zeta = 0$ and $c_g = 1$ is equivalent to not having any assumption on the action-gap. This assumption is inspired by the low-noise condition in the classification literature [5]. As an example of a typical behavior of an action-gap function, Figure 3 depicts $\mathbb{P}_{\rho^*}(0 < g_{Q^*}(X) \leq t)$ for the same 1D stochastic chain walk problem as mentioned in the Introduction. It is seen that the probability of the action-gap function $g_{Q^*}$ being close to zero is very small. Note that the specific polynomial form of the upper bound in Assumption A1 is only a modeling assumption that captures the essence of the action-gap regularity without trying to be too general to lead to unnecessarily complicated analyses.

[Figure 3 appears here: a plot of $\mathbb{P}(0 < g_{Q^*}(X) \leq t)$ against $t$.]

Figure 3: The probability distribution $\mathbb{P}_{\rho^*}(0 < g_{Q^*}(X) \leq t)$ for a 1D stochastic chain walk with 500 states and $\gamma = 0.95$. Here the probability of the action-gap being close to zero is small.
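For a finite problem where $Q^*$ is available (e.g., computed by value iteration), the quantity in Assumption A1 can be estimated directly. The sketch below is ours: a uniform distribution stands in for $\rho^*$ and a random array stands in for a real $Q^*$, both of which are simplifying assumptions.

```python
# Sketch (ours): empirically estimate the action-gap distribution
# P(0 < g_{Q*}(X) <= t) on a finite two-action problem.
import numpy as np

def action_gap_cdf(Q_star, ts):
    """Q_star: array of shape (n_states, 2). Returns P(0 < g(X) <= t) for each t,
    with X uniform over states (a stand-in for the stationary distribution)."""
    g = np.abs(Q_star[:, 0] - Q_star[:, 1])
    return np.array([np.mean((g > 0) & (g <= t)) for t in ts])

Q_star = np.random.default_rng(2).uniform(size=(500, 2))  # stand-in for a real Q*
ts = np.linspace(0.25, 2.0, 8)
for t, p in zip(ts, action_gap_cdf(Q_star, ts)):
    print(f"t={t:4.2f}  P(0 < g <= t) = {p:.3f}")
```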
As a result of the dynamical nature of the MDP, the performance loss depends not only on the choice of $\rho$ and $\rho^*$, but also on the transition probability kernel $P$. To analyze this dependence, we define a concentrability coefficient and use a change of measure argument similar to the work of Munos [23, 24], Antos et al. [10].

Definition 1 (Concentrability of the Future-State Distribution). Given $\rho, \rho^* \in \mathcal{M}(\mathcal{X})$, a policy $\pi$, and an integer $m \geq 0$, let $\rho(P^\pi)^m \in \mathcal{M}(\mathcal{X})$ denote the future-state distribution obtained when the first state is distributed according to $\rho$ and we then follow the policy $\pi$ for $m$ steps. Denote the supremum of the Radon-Nikodym derivative of $\rho(P^\pi)^m$ w.r.t. $\rho^*$ by $c(m; \pi)$, i.e.,

$$c(m; \pi) \triangleq \left\| \frac{d(\rho(P^\pi)^m)}{d\rho^*} \right\|_\infty.$$

If $\rho(P^\pi)^m$ is not absolutely continuous w.r.t. $\rho^*$, we set $c(m; \pi) = \infty$. The concentrability of the future-state distribution coefficient is defined as

$$C(\rho, \rho^*) \triangleq \sup_\pi \sum_{m \geq 0} \gamma^m c(m; \pi).$$
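On a finite state space the Radon-Nikodym derivative reduces to an entrywise ratio of probability vectors, so $c(m; \pi)$ can be computed explicitly for a fixed policy. The sketch below (ours; it truncates the infinite sum and omits the supremum over policies, which is a separate combinatorial step) illustrates this:

```python
# Sketch (ours): the concentrability coefficient for a *fixed* policy on a
# finite MDP, truncating the sum over m at M terms.
import numpy as np

def concentrability(rho, rho_star, P_pi, gamma, M=200):
    """Approximates sum_{m>=0} gamma^m * c(m; pi) for the given policy kernel."""
    total, mu = 0.0, rho.astype(float).copy()
    for m in range(M):
        ratio = np.divide(mu, rho_star, out=np.zeros_like(mu), where=rho_star > 0)
        ratio[(rho_star == 0) & (mu > 0)] = np.inf   # not absolutely continuous
        total += gamma**m * ratio.max()
        mu = mu @ P_pi                               # rho (P^pi)^{m+1}
    return total

n = 4
P_pi = np.full((n, n), 1.0 / n)                      # a fast-mixing toy kernel
rho = np.array([1.0, 0.0, 0.0, 0.0])
rho_star = np.full(n, 1.0 / n)
print(concentrability(rho, rho_star, P_pi, gamma=0.9))  # ~ 4 + 0.9/(1-0.9) = 13
```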
The following theorem upper bounds the performance loss $\mathrm{Loss}(\hat{\pi}; \rho)$ as a function of $\|Q^* - \hat{Q}\|_{p,\rho^*}$, the action-gap distribution, and the concentrability coefficient.

Theorem 1. Consider an MDP $(\mathcal{X}, \mathcal{A}, P, \mathcal{R}, \gamma)$ with $|\mathcal{A}| = 2$ and an estimate $\hat{Q}$ of the optimal action-value function. Let Assumption A1 hold and $C(\rho, \rho^*) < \infty$. Denote $\hat{\pi}$ as the greedy policy w.r.t. $\hat{Q}$. We then have

$$\mathrm{Loss}(\hat{\pi}; \rho) \leq \begin{cases} 2^{1+\zeta} c_g \, C(\rho, \rho^*) \, \big\|\hat{Q} - Q^*\big\|_\infty^{1+\zeta}, & (p = \infty) \\ 2^{1 + \frac{p(1+\zeta)}{p+\zeta}} c_g^{\frac{p-1}{p+\zeta}} \, C(\rho, \rho^*) \, \big\|\hat{Q} - Q^*\big\|_{p,\rho^*}^{\frac{p(1+\zeta)}{p+\zeta}}. & (1 \leq p < \infty) \end{cases}$$
Proof. Let the function $F : \mathcal{X} \to \mathbb{R}$ be defined as $F(x) = V^*(x) - V^{\hat{\pi}}(x) = Q^*(x, \pi^*(x)) - Q^{\hat{\pi}}(x, \hat{\pi}(x))$ for any $x \in \mathcal{X}$. Note that $\mathrm{Loss}(\hat{\pi}; \rho) = \rho F$. Decompose $F(x)$ as

$$F(x) = Q^*(x, \pi^*(x)) - Q^*(x, \hat{\pi}(x)) + Q^*(x, \hat{\pi}(x)) - Q^{\hat{\pi}}(x, \hat{\pi}(x)) = F_1(x) + F_2(x).$$

We have

$$F_2(x) = \left[ r(x, \hat{\pi}(x)) + \gamma \int_{\mathcal{X}} P(dy|x, \hat{\pi}(x)) \, Q^*(y, \pi^*(y)) \right] - \left[ r(x, \hat{\pi}(x)) + \gamma \int_{\mathcal{X}} P(dy|x, \hat{\pi}(x)) \, Q^{\hat{\pi}}(y, \hat{\pi}(y)) \right] = \gamma P^{\hat{\pi}}(\cdot|x) F(\cdot).$$

Therefore, $F = (I - \gamma P^{\hat{\pi}})^{-1} F_1 = \sum_{m \geq 0} (\gamma P^{\hat{\pi}})^m F_1$. Thus,

$$\rho F = \sum_{m \geq 0} \rho (\gamma P^{\hat{\pi}})^m F_1 = \sum_{m \geq 0} \gamma^m \int \rho(P^{\hat{\pi}})^m(dy) F_1(y) = \sum_{m \geq 0} \gamma^m \int \frac{d(\rho(P^{\hat{\pi}})^m)}{d\rho^*}(y) \, d\rho^*(y) F_1(y) \leq \sum_{m \geq 0} \gamma^m c(m; \hat{\pi}) \, \rho^* F_1 \leq C(\rho, \rho^*) \, \rho^* F_1, \qquad (2)$$

in which we used the Radon-Nikodym theorem and the definition of the concentrability coefficient. Let us turn to $F_1$ and provide an upper bound for it. We use techniques similar to [5].

$L_\infty$ result: Note that for any given $x \in \mathcal{X}$, if for some value of $\epsilon > 0$ we have $\hat{\pi}(x) \neq \pi^*(x)$ and $|Q^*(x,a) - \hat{Q}(x,a)| \leq \epsilon$ (for both $a = 1, 2$), then it holds that $g_{Q^*}(x) = |Q^*(x,1) - Q^*(x,2)| \leq 2\epsilon$. To show it, suppose that instead $g_{Q^*}(x) = |Q^*(x,1) - Q^*(x,2)| > 2\epsilon$. Then because of the assumption $|Q^*(x,a) - \hat{Q}(x,a)| \leq \epsilon$ ($a = 1, 2$), the ordering of $\hat{Q}(x,1)$ and $\hat{Q}(x,2)$ is the same as the ordering of $Q^*(x,1)$ and $Q^*(x,2)$, which contradicts the assumption that $\hat{\pi}(x) \neq \pi^*(x)$ (see Figure 2).

Denote $\epsilon_0 = \|Q^* - \hat{Q}\|_\infty$. Whenever $\hat{\pi}(x) = \pi^*(x)$, the value of $F_1(x)$ is zero, so we get

$$F_1(x) = \left[ Q^*(x, \pi^*(x)) - Q^*(x, \hat{\pi}(x)) \right] \left[ \mathbb{I}\{\hat{\pi}(x) = \pi^*(x)\} + \mathbb{I}\{\hat{\pi}(x) \neq \pi^*(x)\} \right]$$
$$= \left[ Q^*(x, \pi^*(x)) - Q^*(x, \hat{\pi}(x)) \right] \mathbb{I}\{\hat{\pi}(x) \neq \pi^*(x)\} \left[ \mathbb{I}\{g_{Q^*}(x) = 0\} + \mathbb{I}\{0 < g_{Q^*}(x) \leq 2\epsilon_0\} + \mathbb{I}\{g_{Q^*}(x) > 2\epsilon_0\} \right]$$
$$\leq 0 + 2\epsilon_0 \, \mathbb{I}\{0 < g_{Q^*}(x) \leq 2\epsilon_0\} + 0.$$

Here we used the definition of $g_{Q^*}(x)$ and the fact that $g_{Q^*}(x)$ is no larger than $2\epsilon_0$. This result together with Assumption A1 shows that $\rho^* F_1 \leq 2\epsilon_0 \, \mathbb{P}_{\rho^*}(0 < g_{Q^*}(X) \leq 2\epsilon_0) \leq 2\epsilon_0 \, c_g (2\epsilon_0)^\zeta$. Plugging this result in (2) finishes the proof of the first part.

$L_p$ result: For any given $x \in \mathcal{X}$, let $D(x) = |Q^*(x,1) - \hat{Q}(x,1)| + |Q^*(x,2) - \hat{Q}(x,2)|$. Whenever $\hat{\pi}(x) \neq \pi^*(x)$, we have $g_{Q^*}(x) \leq D(x)$. Similar to the previous case, we have

$$F_1(x) = \left[ Q^*(x, \pi^*(x)) - Q^*(x, \hat{\pi}(x)) \right] \mathbb{I}\{\hat{\pi}(x) \neq \pi^*(x)\} \left[ \mathbb{I}\{g_{Q^*}(x) = 0\} + \mathbb{I}\{0 < g_{Q^*}(x) \leq t\} + \mathbb{I}\{g_{Q^*}(x) > t\} \right]$$
$$\leq D(x) \left[ \mathbb{I}\{0 < g_{Q^*}(x) \leq t\} + \mathbb{I}\{g_{Q^*}(x) > t\} \right].$$

Take the expectation w.r.t. $\rho^*$ and use Hölder's inequality to get

$$\rho^* F_1 \leq \|D\|_{p,\rho^*} \left[ \mathbb{P}_{\rho^*}(0 < g_{Q^*}(X) \leq t) \right]^{\frac{p-1}{p}} + \|D\|_{p,\rho^*} \left[ \mathbb{P}_{\rho^*}(g_{Q^*}(X) > t) \right]^{\frac{p-1}{p}}$$
$$\leq \|D\|_{p,\rho^*} c_g^{\frac{p-1}{p}} t^{\zeta \frac{p-1}{p}} + \|D\|_{p,\rho^*} \left[ \mathbb{P}_{\rho^*}(D(X) > t) \right]^{\frac{p-1}{p}}$$
$$\leq \|D\|_{p,\rho^*} c_g^{\frac{p-1}{p}} t^{\zeta \frac{p-1}{p}} + \frac{\|D\|_{p,\rho^*}^p}{t^{p-1}},$$

where we used Assumption A1 and the definition of $D(\cdot)$ in the second inequality, and Markov's inequality in the last one. Minimize the upper bound in $t$ to get $t = c_g^{-\frac{1}{p+\zeta}} \|D\|_{p,\rho^*}^{\frac{p}{p+\zeta}}$. This leads to $\rho^* F_1 \leq 2 c_g^{\frac{p-1}{p+\zeta}} \|D\|_{p,\rho^*}^{\frac{p(1+\zeta)}{p+\zeta}}$, which in turn alongside inequality (2) and $\|D\|_{p,\rho^*}^p \leq 2^p \|Q^* - \hat{Q}\|_{p,\rho^*}^p$ proves the second part of this result.
This theorem indicates that if $\|\hat{Q} - Q^*\|_p$ ($1 < p \leq \infty$) has an error upper bound of $O(n^{-\beta})$ (with $\beta$ typically in the range of $(0, 1/2]$ depending on the properties of the MDP and the estimator), we obtain faster convergence upper bounds on the performance loss $\mathrm{Loss}(\hat{\pi}; \rho)$ whenever the problem has an action-gap regularity ($\zeta > 0$).

One might compare Theorem 1 with classical upper bounds such as $\|V^{\hat{\pi}} - V^*\|_\infty \leq \frac{2\gamma}{1-\gamma} \|\hat{V} - V^*\|_\infty$ (Proposition 6.1 of Bertsekas and Tsitsiklis [2]). In order to make these two bounds comparable, we slightly modify the proof of our theorem to get the $L_\infty$-norm in the left hand side. The result would be $\|V^* - V^{\hat{\pi}}\|_\infty \leq \frac{2^{1+\zeta} c_g}{1-\gamma} \|\hat{Q} - Q^*\|_\infty^{1+\zeta}$. If there is no action-gap assumption ($\zeta = 0$ and $c_g = 1$), the results are similar (except for a factor of $\gamma$ and that we measure the error by the difference in the action-value function as opposed to the value function), but when $\zeta > 0$ the error bound significantly improves.
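The following sketch (ours; the random MDP and the uniform noise model are illustrative assumptions) probes the theorem's message empirically: perturbing $Q^*$ by noise of decreasing magnitude shows the greedy policy's loss shrinking much faster than the estimation error, and dropping to exactly zero once the noise falls below half the minimum action-gap:

```python
# Sketch (ours): perturb Q* by noise of size eps and measure the loss of the
# greedy policy w.r.t. a uniform initial distribution.
import numpy as np

rng = np.random.default_rng(3)
nS, nA, gamma = 20, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
r = rng.uniform(size=(nS, nA))

Q = np.zeros((nS, nA))
for _ in range(3000):                      # value iteration for Q*
    Q = r + gamma * P @ Q.max(axis=1)
Q_star, V_star = Q, Q.max(axis=1)

def policy_value(pi):
    P_pi = P[np.arange(nS), pi]
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, r[np.arange(nS), pi])

for eps in [0.5, 0.1, 0.02, 0.004]:
    Q_hat = Q_star + rng.uniform(-eps, eps, size=(nS, nA))
    loss = (V_star - policy_value(Q_hat.argmax(axis=1))).mean()
    print(f"eps={eps:6.3f}  loss={loss:.2e}")
```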
4 Application of the Action-Gap Theorem in Approximate Value Iteration
The goal of this section is to show how the analysis based on the action-gap phenomenon might lead to a tighter upper bound on the performance loss for the family of the AVI algorithms. There are various AVI algorithms (Riedmiller [19], Ernst et al. [20], Munos and Szepesvári [11], Farahmand et al. [12]) that work by generating a sequence of action-value function estimates $(\hat{Q}_k)_{k=0}^K$, in which $\hat{Q}_{k+1}$ is the result of approximately applying the Bellman optimality operator to the previous estimate $\hat{Q}_k$, i.e., $\hat{Q}_{k+1} \approx T^* \hat{Q}_k$. Let us denote the error caused at each iteration by

$$\epsilon_k \triangleq T^* \hat{Q}_k - \hat{Q}_{k+1}. \qquad (3)$$

The following theorem, which is based on Theorem 3 of Farahmand et al. [25], relates the performance loss $\|Q^{\hat{\pi}(\cdot;\hat{Q}_K)} - Q^*\|_{1,\rho}$ of the obtained greedy policy $\hat{\pi}(\cdot; \hat{Q}_K)$ to the error sequence $(\epsilon_k)_{k=0}^{K-1}$ and the action-gap assumption on the MDP. Before stating the theorem, we define the following sequence:

$$\alpha_k = \begin{cases} \frac{1-\gamma}{1-\gamma^{K+1}} \gamma^{K-k-1}, & 0 \leq k < K, \\ \frac{1-\gamma}{1-\gamma^{K+1}} \gamma^K, & k = K. \end{cases}$$

This sequence has $\alpha_k \propto \gamma^{K-k}$ behavior and satisfies $\sum_{k=0}^{K} \alpha_k = 1$.
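A minimal sketch (ours; the MDP, noise level, and horizon are illustrative choices) of this AVI scheme, with the per-iteration error injected explicitly so that $\epsilon_k$ is observable:

```python
# Sketch (ours): approximate value iteration in which each application of T*
# is deliberately corrupted by additive noise, matching the error definition
# eps_k = T* Q_k - Q_{k+1} in (3).
import numpy as np

rng = np.random.default_rng(4)
nS, nA, gamma, K = 30, 2, 0.9, 50
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
r = rng.uniform(size=(nS, nA))

def T_opt(Q):
    return r + gamma * P @ Q.max(axis=1)

Q, err_norms = np.zeros((nS, nA)), []
for k in range(K):
    target = T_opt(Q)                                    # T* Q_k
    Q = target + 0.05 * rng.standard_normal((nS, nA))    # Q_{k+1}
    err_norms.append(np.abs(target - Q).max())           # ||eps_k||_inf
print("mean per-iteration error:", float(np.mean(err_norms)))
```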
Theorem 2 (Error Propagation for AVI). Consider an MDP $(\mathcal{X}, \mathcal{A}, P, \mathcal{R}, \gamma)$ with $|\mathcal{A}| = 2$ that satisfies Assumption A1 and has $C(\rho, \rho^*) < \infty$. Let $p \geq 1$ be a real number and $K$ be a positive integer. Then for any sequence $(\hat{Q}_k)_{k=0}^K \subset B(\mathcal{X} \times \mathcal{A}, Q_{\max})$ and the corresponding sequence $(\epsilon_k)_{k=0}^{K-1}$ defined in (3), we have

$$\mathrm{Loss}(\hat{\pi}(\cdot; \hat{Q}_K); \rho) \leq 2^{\frac{p(1+\zeta)}{p+\zeta}} c_g^{\frac{p-1}{p+\zeta}} \, C(\rho, \rho^*) \left[ \frac{2}{1-\gamma} \left( \sum_{k=0}^{K-1} \alpha_k \|\epsilon_k\|_{p,\rho^*}^p + \alpha_K (2 Q_{\max})^p \right) \right]^{\frac{1+\zeta}{p+\zeta}}.$$
Proof. Similar to Lemma 4.1 of Munos [24], one may derive

$$Q^* - \hat{Q}_{k+1} = T^{\pi^*} Q^* - T^{\pi^*} \hat{Q}_k + T^{\pi^*} \hat{Q}_k - T^* \hat{Q}_k + \epsilon_k \leq \gamma P^{\pi^*} (Q^* - \hat{Q}_k) + \epsilon_k,$$

where we used the property of the Bellman optimality operator $T^* \hat{Q}_k \geq T^{\pi^*} \hat{Q}_k$ and the definition of $\epsilon_k$ (3). By induction, we get

$$Q^* - \hat{Q}_K \leq \sum_{k=0}^{K-1} \gamma^{K-k-1} (P^{\pi^*})^{K-k-1} \epsilon_k + \gamma^K (P^{\pi^*})^K (Q^* - \hat{Q}_0).$$

Therefore, for any $p \geq 1$, the value of $\|Q^* - \hat{Q}_K\|_{p,\rho^*}^p = \rho^* |Q^* - \hat{Q}_K|^p$ is upper bounded by

$$\rho^* |Q^* - \hat{Q}_K|^p \leq \left( \frac{1-\gamma^{K+1}}{1-\gamma} \right)^p \rho^* \left[ \sum_{k=0}^{K-1} \alpha_k (P^{\pi^*})^{K-k-1} |\epsilon_k| + \alpha_K (P^{\pi^*})^K |Q^* - \hat{Q}_0| \right]^p$$
$$\leq \left( \frac{1-\gamma^{K+1}}{1-\gamma} \right)^p \left[ \sum_{k=0}^{K-1} \alpha_k \|\epsilon_k\|_{p,\rho^*}^p + \alpha_K (2 Q_{\max})^p \right],$$

where we used $\rho^* (P^{\pi^*})^m = \rho^*$ (for any $m \geq 0$) and Jensen's inequality. The application of Theorem 1 and noting that $(1-\gamma^{K+1})/(1-\gamma) \leq 1/(1-\gamma)$ lead to the desired result.
Comparing this theorem with Theorem 3 of Farahmand et al. [25] is instructive. Denoting $E = \sum_{k=0}^{K-1} \alpha_k \|\epsilon_k\|_{2,\rho^*}^2$, this paper's result indicates that the effect of the size of $\epsilon_k$ on $\mathrm{Loss}(\hat{\pi}(\cdot; \hat{Q}_K); \rho)$ depends on $E^{\frac{1+\zeta}{2+\zeta}}$, while [25], which does not consider the action-gap regularity, suggests that the effect depends on $E^{1/2}$. For $\zeta > 0$, this indicates a faster convergence rate for the performance loss while for $\zeta = 0$, they are the same.
5 Conclusion
This work introduced the action-gap regularity in reinforcement learning and planning problems and analyzed the action-gap phenomenon for two-action discounted MDPs. We showed that when the problem has a favorable action-gap regularity, quantified by the parameter $\zeta$, the performance loss is much smaller than the error of the estimated optimal action-value function. The action-gap regularity, among other regularities such as the smoothness of the action-value function [13], is a step forward to better understanding of what properties of a sequential decision-making problem make learning and planning easy or difficult.

There are several issues that deserve to be studied in the future. Among them is the extension of the current framework to multi-action discounted MDPs. It is also important to study the relation between the parameter $\zeta$ of the action-gap regularity assumption and the properties of the MDP such as the transition probability kernel and the reward distribution.
Acknowledgments
I thank the anonymous reviewers for their useful comments. This work was partly supported by
AICML and NSERC.
References

[1] Michail G. Lagoudakis and Ronald Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[2] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming (Optimization and Neural Computation Series, 3). Athena Scientific, 1996.
[3] Enno Mammen and Alexander B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808–1829, 1999.
[4] Alexander B. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135–166, 2004.
[5] Jean-Yves Audibert and Alexander B. Tsybakov. Fast learning rates for plug-in classifiers. The Annals of Statistics, 35(2):608–633, 2007.
[6] Alessandro Rinaldo and Larry Wasserman. Generalized density clustering. The Annals of Statistics, 38(5):2678–2722, 2010.
[7] Michail G. Lagoudakis and Ronald Parr. Reinforcement learning as classification: Leveraging modern classifiers. In ICML '03: Proceedings of the 20th International Conference on Machine Learning, pages 424–431, 2003.
[8] Alessandro Lazaric, Mohammad Ghavamzadeh, and Rémi Munos. Analysis of a classification-based policy iteration algorithm. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 607–614. Omnipress, 2010.
[9] Umar Syed and Robert E. Schapire. A reduction from apprenticeship learning to classification. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems (NIPS - 23), pages 2253–2261, 2010.
[10] András Antos, Csaba Szepesvári, and Rémi Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71:89–129, 2008.
[11] Rémi Munos and Csaba Szepesvári. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, 9:815–857, 2008.
[12] Amir-massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári, and Shie Mannor. Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems. In Proceedings of American Control Conference (ACC), pages 725–730, June 2009.
[13] Amir-massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári, and Shie Mannor. Regularized policy iteration. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems (NIPS - 21), pages 441–448. MIT Press, 2009.
[14] Odalric Maillard, Rémi Munos, Alessandro Lazaric, and Mohammad Ghavamzadeh. Finite-sample analysis of Bellman residual minimization. In Proceedings of the Second Asian Conference on Machine Learning (ACML), 2010.
[15] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning). The MIT Press, 1998.
[16] Csaba Szepesvári. Algorithms for Reinforcement Learning. Morgan Claypool Publishers, 2010.
[17] Hamid Reza Maei, Csaba Szepesvári, Shalabh Bhatnagar, and Richard S. Sutton. Toward off-policy learning control with function approximation. In Johannes Fürnkranz and Thorsten Joachims, editors, Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 719–726, Haifa, Israel, June 2010. Omnipress.
[18] J. Zico Kolter and Andrew Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, pages 521–528. ACM, 2009.
[19] Martin Riedmiller. Neural fitted Q iteration: first experiences with a data efficient neural reinforcement learning method. In 16th European Conference on Machine Learning, pages 317–328, 2005.
[20] Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6:503–556, 2005.
[21] Daniela Pucci de Farias and Benjamin Van Roy. The linear programming approach to approximate dynamic programming. Operations Research, 51(6):850–865, 2003.
[22] Marek Petrik and Shlomo Zilberstein. Constraint relaxation in approximate linear programs. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 809–816, New York, NY, USA, 2009. ACM.
[23] Rémi Munos. Error bounds for approximate policy iteration. In ICML 2003: Proceedings of the 20th Annual International Conference on Machine Learning, pages 560–567, 2003.
[24] Rémi Munos. Performance bounds in L_p norm for approximate value iteration. SIAM Journal on Control and Optimization, pages 541–561, 2007.
[25] Amir-massoud Farahmand, Rémi Munos, and Csaba Szepesvári. Error propagation for approximate policy and value iteration. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems (NIPS - 23), pages 568–576, 2010.
Divide-and-Conquer Matrix Factorization

Lester Mackey^a, Ameet Talwalkar^a, Michael I. Jordan^{a,b}
^a Department of Electrical Engineering and Computer Science, UC Berkeley
^b Department of Statistics, UC Berkeley
Abstract
This work introduces Divide-Factor-Combine (DFC), a parallel divide-andconquer framework for noisy matrix factorization. DFC divides a large-scale
matrix factorization task into smaller subproblems, solves each subproblem in parallel using an arbitrary base matrix factorization algorithm, and combines the subproblem solutions using techniques from randomized matrix approximation. Our
experiments with collaborative filtering, video background modeling, and simulated data demonstrate the near-linear to super-linear speed-ups attainable with
this approach. Moreover, our analysis shows that DFC enjoys high-probability
recovery guarantees comparable to those of its base algorithm.
1 Introduction
The goal in matrix factorization is to recover a low-rank matrix from irrelevant noise and corruption. We focus on two instances of the problem: noisy matrix completion, i.e., recovering a low-rank
matrix from a small subset of noisy entries, and noisy robust matrix factorization [2, 3, 4], i.e., recovering a low-rank matrix from corruption by noise and outliers of arbitrary magnitude. Examples
of the matrix completion problem include collaborative filtering for recommender systems, link prediction for social networks, and click prediction for web search, while applications of robust matrix
factorization arise in video surveillance [2], graphical model selection [4], document modeling [17],
and image alignment [21].
These two classes of matrix factorization problems have attracted significant interest in the research
community. In particular, convex formulations of noisy matrix factorization have been shown to admit strong theoretical recovery guarantees [1, 2, 3, 20], and a variety of algorithms (e.g., [15, 16, 23])
have been developed for solving both matrix completion and robust matrix factorization via convex
relaxation. Unfortunately, these methods are inherently sequential and all rely on the repeated and
costly computation of truncated SVDs, factors that limit the scalability of the algorithms.
To improve scalability and leverage the growing availability of parallel computing architectures, we
propose a divide-and-conquer framework for large-scale matrix factorization. Our framework, entitled Divide-Factor-Combine (DFC), randomly divides the original matrix factorization task into
cheaper subproblems, solves those subproblems in parallel using any base matrix factorization algorithm, and combines the solutions to the subproblem using efficient techniques from randomized
matrix approximation. The inherent parallelism of DFC allows for near-linear to superlinear speedups in practice, while our theory provides high-probability recovery guarantees for DFC comparable
to those enjoyed by its base algorithm.
The remainder of the paper is organized as follows. In Section 2, we define the setting of noisy matrix factorization and introduce the components of the DFC framework. To illustrate the significant
speed-up and robustness of DFC and to highlight the effectiveness of DFC ensembling, we present
experimental results on collaborative filtering, video background modeling, and simulated data in
Section 3. Our theoretical analysis follows in Section 4. There, we establish high-probability noisy
recovery guarantees for DFC that rest upon a novel analysis of randomized matrix approximation
and a new recovery result for noisy matrix completion.
Notation. For $M \in \mathbb{R}^{m \times n}$, we define $M_{(i)}$ as the $i$th row vector and $M_{ij}$ as the $ij$th entry. If $\mathrm{rank}(M) = r$, we write the compact singular value decomposition (SVD) of $M$ as $U_M \Sigma_M V_M^\top$, where $\Sigma_M$ is diagonal and contains the $r$ non-zero singular values of $M$, and $U_M \in \mathbb{R}^{m \times r}$ and $V_M \in \mathbb{R}^{n \times r}$ are the corresponding left and right singular vectors of $M$. We define $M^+ = V_M \Sigma_M^{-1} U_M^\top$ as the Moore-Penrose pseudoinverse of $M$ and $P_M = M M^+$ as the orthogonal projection onto the column space of $M$. We let $\|\cdot\|_2$, $\|\cdot\|_F$, and $\|\cdot\|_*$ respectively denote the spectral, Frobenius, and nuclear norms of a matrix and let $\|\cdot\|$ represent the $\ell_2$ norm of a vector.
2 The Divide-Factor-Combine Framework
In this section, we present our divide-and-conquer framework for scalable noisy matrix factorization.
We begin by defining the problem setting of interest.
2.1 Noisy Matrix Factorization (MF)

In the setting of noisy matrix factorization, we observe a subset of the entries of a matrix $M = L_0 + S_0 + Z_0 \in \mathbb{R}^{m \times n}$, where $L_0$ has rank $r \ll m, n$, $S_0$ represents a sparse matrix of outliers of arbitrary magnitude, and $Z_0$ is a dense noise matrix. We let $\Omega$ represent the locations of the observed entries and $P_\Omega$ be the orthogonal projection onto the space of $m \times n$ matrices with support $\Omega$, so that

$$(P_\Omega(M))_{ij} = M_{ij} \text{ if } (i,j) \in \Omega, \quad \text{and} \quad (P_\Omega(M))_{ij} = 0 \text{ otherwise.}$$

Our goal is to recover the low-rank matrix $L_0$ from $P_\Omega(M)$ with error proportional to the noise level $\Delta \triangleq \|Z_0\|_F$. We will focus on two specific instances of this general problem:

- Noisy Matrix Completion (MC): $s \triangleq |\Omega|$ entries of $M$ are revealed uniformly without replacement, along with their locations. There are no outliers, so that $S_0$ is identically zero.
- Noisy Robust Matrix Factorization (RMF): $S_0$ is identically zero save for $s$ outlier entries of arbitrary magnitude with unknown locations distributed uniformly without replacement. All entries of $M$ are observed, so that $P_\Omega(M) = M$.
2.2 Divide-Factor-Combine

Algorithms 1 and 2 summarize two canonical examples of the general Divide-Factor-Combine framework that we refer to as DFC-Proj and DFC-Nys. Each algorithm has three simple steps:

(D step) Divide input matrix into submatrices: DFC-Proj randomly partitions $P_\Omega(M)$ into $t$ $l$-column submatrices, $\{P_\Omega(C_1), \ldots, P_\Omega(C_t)\}$,^1 while DFC-Nys selects an $l$-column submatrix, $P_\Omega(C)$, and a $d$-row submatrix, $P_\Omega(R)$, uniformly at random.

(F step) Factor each submatrix in parallel using any base MF algorithm: DFC-Proj performs $t$ parallel submatrix factorizations, while DFC-Nys performs two such parallel factorizations. Standard base MF algorithms output the low-rank approximations $\{\hat{C}_1, \ldots, \hat{C}_t\}$ for DFC-Proj and $\hat{C}$ and $\hat{R}$ for DFC-Nys. All matrices are retained in factored form.

(C step) Combine submatrix estimates: DFC-Proj generates a final low-rank estimate $\hat{L}^{proj}$ by projecting $[\hat{C}_1, \ldots, \hat{C}_t]$ onto the column space of $\hat{C}_1$, while DFC-Nys forms the low-rank estimate $\hat{L}^{nys}$ from $\hat{C}$ and $\hat{R}$ via the generalized Nyström method. These matrix approximation techniques are described in more detail in Section 2.3.
2.3 Randomized Matrix Approximations

Our divide-and-conquer algorithms rely on two methods that generate randomized low-rank approximations to an arbitrary matrix $M$ from submatrices of $M$.

^1 For ease of discussion, we assume that $\mathrm{mod}(n, t) = 0$, and hence, $l = n/t$. Note that for arbitrary $n$ and $t$, $P_\Omega(M)$ can always be partitioned into $t$ submatrices, each with either $\lceil n/t \rceil$ or $\lfloor n/t \rfloor$ columns.
Algorithm 1 DFC-Proj
  Input: P_Omega(M), t
  {P_Omega(C_i)}_{1<=i<=t} = SampCol(P_Omega(M), t)
  do in parallel
    C_hat_1 = Base-MF-Alg(P_Omega(C_1))
    ...
    C_hat_t = Base-MF-Alg(P_Omega(C_t))
  end do
  L_hat_proj = ColProjection(C_hat_1, ..., C_hat_t)

Algorithm 2 DFC-Nys^a
  Input: P_Omega(M), l, d
  P_Omega(C), P_Omega(R) = SampColRow(P_Omega(M), l, d)
  do in parallel
    C_hat = Base-MF-Alg(P_Omega(C))
    R_hat = Base-MF-Alg(P_Omega(R))
  end do
  L_hat_nys = GenNystrom(C_hat, R_hat)

^a When Q is a submatrix of M we abuse notation and define P_Omega(Q) as the corresponding submatrix of P_Omega(M).
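A sketch of the DFC-Proj pipeline follows. It is ours, not the paper's code: a truncated SVD on fully observed data stands in for the APG-style MC/RMF solvers used as base algorithms in the paper, which we do not reimplement here.

```python
# Minimal sketch (ours) of DFC-Proj with a placeholder base factorizer.
import numpy as np

def base_mf(C, rank):
    """Placeholder base MF algorithm: best rank-`rank` approximation of C."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def dfc_proj(M, t, rank, rng=None):
    rng = rng or np.random.default_rng(0)
    col_blocks = np.array_split(rng.permutation(M.shape[1]), t)   # D step
    C_hats = [base_mf(M[:, idx], rank) for idx in col_blocks]     # F step (parallelizable)
    U, _, _ = np.linalg.svd(C_hats[0], full_matrices=False)
    U = U[:, :rank]
    L_hat = np.empty_like(M)
    for idx, C_hat in zip(col_blocks, C_hats):                    # C step: project each
        L_hat[:, idx] = U @ (U.T @ C_hat)                         # block onto col(C_hat_1)
    return L_hat

rng = np.random.default_rng(1)
L0 = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 300))
M = L0 + 0.1 * rng.standard_normal((200, 300))
print(np.linalg.norm(dfc_proj(M, t=4, rank=10) - L0) / np.linalg.norm(L0))
```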
Column Projection. This approximation, introduced by Frieze et al. [7], is derived from column sampling of $M$. We begin by sampling $l < n$ columns uniformly without replacement and let $C$ be the $m \times l$ matrix of sampled columns. Then, column projection uses $C$ to generate a "matrix projection" approximation [13] of $M$ as follows:

$$L^{proj} = C C^+ M = U_C U_C^\top M.$$

In practice, we do not reconstruct $L^{proj}$ but rather maintain low-rank factors, e.g., $U_C$ and $U_C^\top M$.
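A minimal sketch (ours) of this step, keeping the factored form $(U_C, U_C^\top M)$ rather than materializing $L^{proj}$, as the text suggests:

```python
# Sketch (ours): column projection, returning low-rank factors.
import numpy as np

def column_projection(M, l, rng=np.random.default_rng(0)):
    idx = rng.choice(M.shape[1], size=l, replace=False)
    C = M[:, idx]
    U_C, _, _ = np.linalg.svd(C, full_matrices=False)
    return U_C, U_C.T @ M          # L_proj = U_C @ (U_C.T @ M)
```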
Generalized Nyström Method. The standard Nyström method is often used to speed up large-scale learning applications involving symmetric positive semidefinite (SPSD) matrices [24] and has been generalized for arbitrary real-valued matrices [8]. In particular, after sampling columns to obtain $C$, imagine that we independently sample $d < m$ rows uniformly without replacement. Let $R$ be the $d \times n$ matrix of sampled rows and $W$ be the $d \times l$ matrix formed from the intersection of the sampled rows and columns. Then, the generalized Nyström method uses $C$, $W$, and $R$ to compute a "spectral reconstruction" approximation [13] of $M$ as follows:

$$L^{nys} = C W^+ R = C V_W \Sigma_W^+ U_W^\top R.$$

As with $L^{proj}$, we store low-rank factors of $L^{nys}$, such as $C V_W \Sigma_W^+$ and $U_W^\top R$.
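A minimal sketch (ours) of the generalized Nyström reconstruction $L^{nys} = C W^+ R$ from independently sampled columns and rows:

```python
# Sketch (ours): generalized Nystrom approximation of an arbitrary matrix.
import numpy as np

def generalized_nystrom(M, l, d, rng=np.random.default_rng(0)):
    cols = rng.choice(M.shape[1], size=l, replace=False)
    rows = rng.choice(M.shape[0], size=d, replace=False)
    C, R = M[:, cols], M[rows, :]
    W = M[np.ix_(rows, cols)]      # intersection of sampled rows and columns
    return C @ np.linalg.pinv(W) @ R
```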
2.4 Running Time of DFC

Many state-of-the-art MF algorithms have $\Omega(m n k_M)$ per-iteration time complexity due to the rank-$k_M$ truncated SVD performed on each iteration. DFC significantly reduces the per-iteration complexity to $O(m l k_{C_i})$ time for $C_i$ (or $C$) and $O(n d k_R)$ time for $R$. The cost of combining the submatrix estimates is even smaller, since the outputs of standard MF algorithms are returned in factored form. Indeed, the column projection step of DFC-Proj requires only $O(m \bar{k}^2 + l \bar{k}^2)$ time for $\bar{k} \triangleq \max_i k_{C_i}$: $O(m \bar{k}^2 + l \bar{k}^2)$ time for the pseudoinversion of $\hat{C}_1$ and $O(m \bar{k}^2 + l \bar{k}^2)$ time for matrix multiplication with each $\hat{C}_i$ in parallel. Similarly, the generalized Nyström step of DFC-Nys requires only $O(l \hat{k}^2 + d \hat{k}^2 + \min(m,n) \hat{k}^2)$ time, where $\hat{k} \triangleq \max(k_C, k_R)$. Hence, DFC divides the expensive task of matrix factorization into smaller subproblems that can be executed in parallel and efficiently combines the low-rank, factored results.
2.5 Ensemble Methods

Ensemble methods have been shown to improve performance of matrix approximation algorithms, while straightforwardly leveraging the parallelism of modern many-core and distributed architectures [14]. As such, we propose ensemble variants of the DFC algorithms that demonstrably reduce recovery error while introducing a negligible cost to the parallel running time. For DFC-Proj-Ens, rather than projecting only onto the column space of $\hat{C}_1$, we project $[\hat{C}_1, \ldots, \hat{C}_t]$ onto the column space of each $\hat{C}_i$ in parallel and then average the $t$ resulting low-rank approximations. For DFC-Nys-Ens, we choose a random $d$-row submatrix $P_\Omega(R)$ as in DFC-Nys and independently partition the columns of $P_\Omega(M)$ into $\{P_\Omega(C_1), \ldots, P_\Omega(C_t)\}$ as in DFC-Proj. After running the base MF algorithm on each submatrix, we apply the generalized Nyström method to each $(\hat{C}_i, \hat{R})$ pair in parallel and average the $t$ resulting low-rank approximations. Section 3 highlights the empirical effectiveness of ensembling.
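A sketch (ours) of the Proj-Ens combination step; given the per-block estimates, it projects their concatenation onto the column space of each block and averages the $t$ projections, each of which is independent and hence parallelizable:

```python
# Sketch (ours): the DFC-Proj-Ens combine step.
import numpy as np

def proj_ens_combine(C_hats):
    """C_hats: list of m x l_i low-rank column-block estimates (same row count)."""
    full = np.hstack(C_hats)
    avg = np.zeros_like(full)
    for C in C_hats:
        U, _, _ = np.linalg.svd(C, full_matrices=False)
        avg += U @ (U.T @ full)    # projection of [C_1, ..., C_t] onto col(C)
    return avg / len(C_hats)
```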
3 Experimental Evaluation
We now explore the accuracy and speed-up of DFC on a variety of simulated and real-world datasets.
We use state-of-the-art matrix factorization algorithms in our experiments: the Accelerated Proximal
Gradient (APG) algorithm of [23] as our base noisy MC algorithm and the APG algorithm of [15] as
our base noisy RMF algorithm. In all experiments, we use the default parameter settings suggested
by [23] and [15], measure recovery error via root mean square error (RMSE), and report parallel
running times for DFC. We moreover compare against two baseline methods: APG used on the full matrix $M$ and PARTITION, which performs matrix factorization on $t$ submatrices just like DFC-Proj but omits the final column projection step.
3.1 Simulations

For our simulations, we focused on square matrices ($m = n$) and generated random low-rank and sparse decompositions, similar to the schemes used in related work, e.g., [2, 12, 25]. We created $L_0 \in \mathbb{R}^{m \times m}$ as a random product, $A B^\top$, where $A$ and $B$ are $m \times r$ matrices with independent $\mathcal{N}(0, 1/r)$ entries such that each entry of $L_0$ has unit variance. $Z_0$ contained independent $\mathcal{N}(0, 0.1)$ entries. In the MC setting, $s$ entries of $L_0 + Z_0$ were revealed uniformly at random. In the RMF setting, the support of $S_0$ was generated uniformly at random, and the $s$ corrupted entries took values in $[0, 1]$ with uniform probability. For each algorithm, we report error between $L_0$ and the recovered low-rank matrix, and all reported results are averages over five trials.
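A sketch (ours) of this data-generation scheme for the MC setting; the factor scale is an assumption we chose so that the unit-variance property stated above actually holds in the sketch:

```python
# Sketch (ours): synthetic low-rank + noise MC problem with a uniform mask.
import numpy as np

def make_mc_problem(m, r, noise_std=0.1, frac_revealed=0.04,
                    rng=np.random.default_rng(0)):
    scale = r ** -0.25          # std r^{-1/4} gives Var((A @ B.T)[i, j]) = 1
    A = rng.normal(0.0, scale, size=(m, r))
    B = rng.normal(0.0, scale, size=(m, r))
    L0 = A @ B.T
    Z0 = rng.normal(0.0, noise_std, size=(m, m))
    mask = rng.random((m, m)) < frac_revealed     # observed locations Omega
    return L0, Z0, mask
```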
[Figure 1 appears here: two panels (MC and RMF) plotting RMSE for Part-10%, Proj-10%, Nys-10%, Proj-Ens-10%, Nys-Ens-10%, Proj-Ens-25%, and the base algorithm against the percentage of revealed entries (MC) and the percentage of outliers (RMF).]

Figure 1: Recovery error of DFC relative to base algorithms.
We first explored the recovery error of DFC as a function of $s$, using ($m$ = 10K, $r$ = 10) with varying observation sparsity for MC and ($m$ = 1K, $r$ = 10) with a varying percentage of outliers for RMF. The results are summarized in Figure 1.^2 In both MC and RMF, the gaps in recovery between APG and DFC are small when sampling only 10% of rows and columns. Moreover, DFC-Proj-Ens in particular consistently outperforms PARTITION and DFC-Nys-Ens and matches the performance of APG for most settings of $s$.

We next explored the speed-up of DFC as a function of matrix size. For MC, we revealed 4% of the matrix entries and set $r = 0.001 \cdot m$, while for RMF we fixed the percentage of outliers to 10% and set $r = 0.01 \cdot m$. We sampled 10% of rows and columns and observed that recovery errors were comparable to the errors presented in Figure 1 for similar settings of $s$; in particular, at all values of $n$ for both MC and RMF, the errors of APG and DFC-Proj-Ens were nearly identical. Our timing results, presented in Figure 2, illustrate a near-linear speed-up for MC and a superlinear speed-up for RMF across varying matrix sizes. Note that the timing curves of the DFC algorithms and PARTITION all overlap, a fact that highlights the minimal computational cost of the final matrix approximation step.

^2 In the left-hand plot of Figure 1, the lines for Proj-10% and Proj-Ens-10% overlap.
[Figure 2 appears here: two panels (MC and RMF) plotting running time in seconds against the matrix size m for Part-10%, Proj-10%, Nys-10%, Proj-Ens-10%, Nys-Ens-10%, and the base algorithm.]

Figure 2: Speed-up of DFC relative to base algorithms.
3.2 Collaborative Filtering

Collaborative filtering for recommender systems is one prevalent real-world application of noisy matrix completion. A collaborative filtering dataset can be interpreted as the incomplete observation of a ratings matrix with columns corresponding to users and rows corresponding to items. The goal is to infer the unobserved entries of this ratings matrix. We evaluate DFC on two of the largest publicly available collaborative filtering datasets: MovieLens 10M^3 ($m$ = 4K, $n$ = 6K, $s$ > 10M) and the Netflix Prize dataset^4 ($m$ = 18K, $n$ = 480K, $s$ > 100M). To generate test sets drawn from the training distribution, for each dataset, we aggregated all available rating data into a single training set and withheld test entries uniformly at random, while ensuring that at least one training observation remained in each row and column. The algorithms were then run on the remaining training portions and evaluated on the test portions of each split. The results, averaged over three train-test splits, are summarized in Table 1. Notably, DFC-Proj, DFC-Proj-Ens, and DFC-Nys-Ens all outperform PARTITION, and DFC-Proj-Ens performs comparably to APG while providing a nearly linear parallel time speed-up. The poorer performance of DFC-Nys can be in part explained by the asymmetry of these problems. Since these matrices have many more columns than rows, MF on column submatrices is inherently easier than MF on row submatrices, and for DFC-Nys, we observe that $\hat{C}$ is an accurate estimate while $\hat{R}$ is not.
Table 1: Performance of DFC relative to APG on collaborative filtering tasks.

                        MovieLens 10M            Netflix
  Method                RMSE      Time           RMSE      Time
  APG                   0.8005    294.3s         0.8433    2653.1s
  PARTITION-25%         0.8146    77.4s          0.8451    689.1s
  PARTITION-10%         0.8461    36.0s          0.8492    289.2s
  DFC-Nys-25%           0.8449    77.2s          0.8832    890.9s
  DFC-Nys-10%           0.8769    53.4s          0.9224    487.6s
  DFC-Nys-Ens-25%       0.8085    84.5s          0.8486    964.3s
  DFC-Nys-Ens-10%       0.8327    63.9s          0.8613    546.2s
  DFC-Proj-25%          0.8061    77.4s          0.8436    689.5s
  DFC-Proj-10%          0.8272    36.1s          0.8484    289.7s
  DFC-Proj-Ens-25%      0.7944    77.4s          0.8411    689.5s
  DFC-Proj-Ens-10%      0.8119    36.1s          0.8433    289.7s
3.3 Background Modeling

Background modeling has important practical ramifications for detecting activity in surveillance video. This problem can be framed as an application of noisy RMF, where each video frame is a column of some matrix ($M$), the background model is low-rank ($L_0$), and moving objects and background variations, e.g., changes in illumination, are outliers ($S_0$). We evaluate DFC on two videos: "Hall" (200 frames of size 176 × 144) contains significant foreground variation and was studied by [2], while "Lobby" (1546 frames of size 168 × 120) includes many changes in illumination (a smaller video with 250 frames was studied by [2]). We focused on DFC-Proj-Ens, due to its superior performance in previous experiments, and measured the RMSE between the background model recovered by DFC and that of APG. On both videos, DFC-Proj-Ens recovered nearly the same background model as the full APG algorithm in a small fraction of the time. On "Hall," the DFC-Proj-Ens-5% and DFC-Proj-Ens-0.5% models exhibited RMSEs of 0.564 and 1.55, quite small given pixels with 256 intensity values. The associated runtime was reduced from 342.5s for APG to real-time (5.2s for a 13s video) for DFC-Proj-Ens-0.5%. Snapshots of the results are presented in Figure 3. On "Lobby," the RMSE of DFC-Proj-Ens-4% was 0.64, and the speed-up over APG was more than 20X, i.e., the runtime reduced from 16557s to 792s.

^3 http://www.grouplens.org/
^4 http://www.netflixprize.com/

[Figure 3 appears here: snapshots of an original "Hall" frame and the backgrounds recovered by APG (342.5s), 5% sampling (24.2s), and 0.5% sampling (5.2s).]

Figure 3: Sample "Hall" recovery by APG, DFC-Proj-Ens-5%, and DFC-Proj-Ens-0.5%.
4 Theoretical Analysis
Having investigated the empirical advantages of DFC, we now show that DFC admits high-probability recovery guarantees comparable to those of its base algorithm.
4.1 Matrix Coherence

Since not all matrices can be recovered from missing entries or gross outliers, recent theoretical advances have studied sufficient conditions for accurate noisy MC [3, 12, 20] and RMF [1, 25]. Most prevalent among these are matrix coherence conditions, which limit the extent to which the singular vectors of a matrix are correlated with the standard basis. Letting $e_i$ be the $i$th column of the standard basis, we define two standard notions of coherence [22]:

Definition 1 ($\mu_0$-Coherence). Let $V \in \mathbb{R}^{n \times r}$ contain orthonormal columns with $r \leq n$. Then the $\mu_0$-coherence of $V$ is:

$$\mu_0(V) \triangleq \frac{n}{r} \max_{1 \leq i \leq n} \|P_V e_i\|^2 = \frac{n}{r} \max_{1 \leq i \leq n} \|V_{(i)}\|^2.$$

Definition 2 ($\mu_1$-Coherence). Let $L \in \mathbb{R}^{m \times n}$ have rank $r$. Then, the $\mu_1$-coherence of $L$ is:

$$\mu_1(L) \triangleq \sqrt{\frac{mn}{r}} \max_{i,j} \left| e_i^\top U_L V_L^\top e_j \right|.$$

For any $\mu > 0$, we will call a matrix $L$ $(\mu, r)$-coherent if $\mathrm{rank}(L) = r$, $\max(\mu_0(U_L), \mu_0(V_L)) \leq \mu$, and $\mu_1(L) \leq \sqrt{\mu}$. Our analysis will focus on base MC and RMF algorithms that express their recovery guarantees in terms of the $(\mu, r)$-coherence of the target low-rank matrix $L_0$. For such algorithms, lower values of $\mu$ correspond to better recovery properties.
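Both coherence quantities can be computed directly from a compact SVD; the sketch below (ours; the random low-rank input is an arbitrary example) does exactly that:

```python
# Sketch (ours): mu_0- and mu_1-coherence from the definitions above.
import numpy as np

def coherences(L, r):
    U, _, Vt = np.linalg.svd(L, full_matrices=False)
    U, V = U[:, :r], Vt[:r].T
    m, n = L.shape
    mu0_U = (m / r) * (U ** 2).sum(axis=1).max()       # mu_0(U_L)
    mu0_V = (n / r) * (V ** 2).sum(axis=1).max()       # mu_0(V_L)
    mu1 = np.sqrt(m * n / r) * np.abs(U @ V.T).max()   # mu_1(L)
    return mu0_U, mu0_V, mu1

rng = np.random.default_rng(0)
L = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
print(coherences(L, r=5))
```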
4.2 DFC Master Theorem

We now show that the same coherence conditions that allow for accurate MC and RMF also imply high-probability recovery for DFC. To make this precise, we let $M = L_0 + S_0 + Z_0 \in \mathbb{R}^{m \times n}$, where $L_0$ is $(\mu, r)$-coherent and $\|P_\Omega(Z_0)\|_F \leq \Delta$. We further fix any $\epsilon, \delta \in (0, 1]$ and define $\mathcal{A}(X)$ as the event that a matrix $X$ is $(\frac{r\mu^2}{1-\epsilon/2}, r)$-coherent. Then, our Thm. 3 provides a generic recovery bound for DFC when used in combination with an arbitrary base algorithm. The proof requires a novel, coherence-based analysis of column projection and random column sampling. These results of independent interest are presented in Appendix A.
Theorem 3. Choose t = n/l and l ? cr? log(n) log(2/?)/"2 , where c is a fixed positive constant,
! and fix any ce ?? 0. Under the "notation of Algorithm 1, if a base MF algorithm yields
? i " > ce ml? | A(C0,i ) ? ?C for each i, where C0,i is the corresponding partiP "C0,i ? C
F
tion of L0 , then, with probability at least (1 ? ?)(1 ? t?C ), DFC-P ROJ guarantees
?
? proj " ? (2 + ")ce mn?.
"L0 ? L
F
"
!
?
?
? ?C
Under Algorithm 2, if a base MF algorithm yields P "C0 ? C"
F > ce ml? | A(C)
!
"
?
2
?
?
and P "R0 ? R"
F > ce dn? | A(R) ? ?R for d ? cl?0 (C) log(m) log(1/?)/" , then, with
probability at least (1 ? ?)2 (1 ? ?C ? ?R ), DFC-N YS guarantees
?
? nys " ? (2 + 3")ce ml + dn?.
"L0 ? L
F
To understand the conclusions of Thm. 3, consider a typical base algorithm which, when applied to
$\mathcal{P}_\Omega(\mathbf{M})$, recovers an estimate $\hat{\mathbf{L}}$ satisfying $\|\mathbf{L}_0 - \hat{\mathbf{L}}\|_F \le c_e\sqrt{mn}\,\Delta$ with high probability. Thm. 3
asserts that, with appropriately reduced probability, DFC-PROJ exhibits the same recovery error
scaled by an adjustable factor of $2+\epsilon$, while DFC-NYS exhibits a somewhat smaller error scaled by
$2+3\epsilon$.⁵ The key take-away then is that DFC introduces a controlled increase in error and a controlled
decrement in the probability of success, allowing the user to interpolate between maximum speed
and maximum accuracy. Thus, DFC can quickly provide near-optimal recovery in the noisy setting
and exact recovery in the noiseless setting ($\Delta = 0$), even when entries are missing or grossly
corrupted. The next two sections demonstrate how Thm. 3 can be applied to derive specific DFC
recovery guarantees for noisy MC and noisy RMF. In these sections, we let $\bar{n} \triangleq \max(m, n)$.
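The divide-factor-combine pattern that Thm. 3 analyzes can be sketched in a few lines. The following is our own illustrative sketch, assuming a DFC-PROJ-style column partition and a column-projection combine step; `base_solver` is a stand-in for any base MC/RMF algorithm, and the random column permutation is omitted for brevity:

```python
import numpy as np

def dfc_proj(M_obs, mask, t, base_solver):
    """Sketch of DFC-PROJ: divide columns into t blocks, factor each block
    with the base solver, then combine by projecting all blocks onto the
    column space of the first recovered block.

    M_obs: observed matrix (zeros at unobserved entries).
    mask:  boolean matrix of observed locations (the set Omega).
    base_solver(M_sub, mask_sub) -> low-rank estimate of the sub-block.
    """
    m, n = M_obs.shape
    blocks = np.array_split(np.arange(n), t)
    C_hats = [base_solver(M_obs[:, b], mask[:, b]) for b in blocks]
    # Column projection via QR; equivalent to C1 C1^+ [C1, ..., Ct] when
    # the first block estimate has full column rank.
    Q, _ = np.linalg.qr(C_hats[0])
    return np.hstack([Q @ (Q.T @ C) for C in C_hats])
```

The ensemble variants (e.g., DFC-PROJ-ENS used in the experiments above) would instead average the projections over all blocks.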
4.3 Consequences for Noisy MC
Our first corollary of Thm. 3 shows that DFC retains the high-probability recovery guarantees of a
standard MC solver while operating on matrices of much smaller dimension. Suppose that a base
MC algorithm solves the following convex optimization problem, studied in [3]:
$$\text{minimize}_{\mathbf{L}} \;\; \|\mathbf{L}\|_* \quad \text{subject to} \quad \|\mathcal{P}_\Omega(\mathbf{M} - \mathbf{L})\|_F \le \Delta.$$
Then, Cor. 4 follows from a novel guarantee for noisy convex MC, proved in the appendix.
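For small instances, the convex program above can be transcribed directly, e.g. with CVXPY (our own sketch, not the solver used in the experiments; DFC exists precisely because solving this program at full scale is expensive):

```python
import cvxpy as cp
import numpy as np

def noisy_mc(M_obs, mask, Delta):
    """Convex noisy MC: min ||L||_*  s.t.  ||P_Omega(M - L)||_F <= Delta.

    M_obs: matrix holding observed values; entries outside Omega are ignored.
    mask:  binary array, 1 on the observed set Omega.
    """
    L = cp.Variable(M_obs.shape)
    resid = cp.multiply(mask.astype(float), M_obs - L)   # P_Omega(M - L)
    prob = cp.Problem(cp.Minimize(cp.normNuc(L)),
                      [cp.norm(resid, 'fro') <= Delta])
    prob.solve()
    return L.value
```

At the matrix sizes in the experiments above, a first-order solver such as APG [23] would be used in place of a generic conic solver.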
Corollary 4. Suppose that $\mathbf{L}_0$ is $(\mu, r)$-coherent and that $s$ entries of $\mathbf{M}$ are observed, with locations
$\Omega$ distributed uniformly. Define the oversampling parameter
$$\beta_s \triangleq \frac{s(1-\epsilon/2)}{32\mu^2 r^2 (m+n)\log^2(m+n)},$$
and fix any target rate parameter $1 < \beta \le \beta_s$. Then, if $\|\mathcal{P}_\Omega(\mathbf{M}) - \mathcal{P}_\Omega(\mathbf{L}_0)\|_F \le \Delta$ a.s., it suffices
to choose $t = n/l$ and
$$l \ge \max\left\{\frac{n\beta + \sqrt{n(\beta-1)}}{\beta_s},\; \frac{cr\mu\log(n)\log(2/\delta)}{\epsilon^2}\right\},\qquad d \ge \max\left\{\frac{m\beta + \sqrt{m(\beta-1)}}{\beta_s},\; \frac{cl\,\mu_0(\hat{\mathbf{C}})\log(m)\log(1/\delta)}{\epsilon^2}\right\}$$
to achieve
DFC-PROJ: $\|\mathbf{L}_0 - \hat{\mathbf{L}}^{proj}\|_F \le (2+\epsilon)\,c'_e\sqrt{mn}\,\Delta$
DFC-NYS: $\|\mathbf{L}_0 - \hat{\mathbf{L}}^{nys}\|_F \le (2+3\epsilon)\,c'_e(\sqrt{ml}+\sqrt{dn})\,\Delta$
with probability at least
DFC-PROJ: $(1-\delta)(1 - 5t\log(\bar{n})\,\bar{n}^{2-2\beta}) \ge (1-\delta)(1 - \bar{n}^{3-2\beta})$
DFC-NYS: $(1-\delta)^2(1 - 10\log(\bar{n})\,\bar{n}^{2-2\beta})$,
respectively, with $c$ as in Thm. 3 and $c'_e$ a positive constant.
⁵Note that the DFC-NYS guarantee requires the number of rows sampled to grow in proportion to $\mu_0(\hat{\mathbf{C}})$, a quantity always bounded by $\mu$ in our simulations.
Notably, Cor. 4 allows for the fraction of columns and rows sampled to decrease as the oversampling
parameter $\beta_s$ increases with $m$ and $n$. In the best case, $\beta_s = \Theta(mn/[(m+n)\log^2(m+n)])$, and
Cor. 4 requires only $O(\frac{n}{m}\log^2(m+n))$ sampled columns and $O(\frac{m}{n}\log^2(m+n))$ sampled rows. In
the worst case, $\beta_s = \Theta(1)$, and Cor. 4 requires the number of sampled columns and rows to grow
linearly with the matrix dimensions. As a more realistic intermediate scenario, consider the setting
in which $\beta_s = \Theta(\sqrt{m+n})$ and thus a vanishing fraction of entries are revealed. In this setting,
only $O(\sqrt{m+n})$ columns and rows are required by Cor. 4.
4.4 Consequences for Noisy RMF
Our next corollary shows that DFC retains the high-probability recovery guarantees of a standard
RMF solver while operating on matrices of much smaller dimension. Suppose that a base RMF
algorithm solves the following convex optimization problem, studied in [25]:
$$\text{minimize}_{\mathbf{L},\mathbf{S}} \;\; \|\mathbf{L}\|_* + \lambda\|\mathbf{S}\|_1 \quad \text{subject to} \quad \|\mathbf{M} - \mathbf{L} - \mathbf{S}\|_F \le \Delta,$$
with $\lambda = 1/\sqrt{\bar{n}}$. Then, Cor. 5 follows from Thm. 3 and the noisy RMF guarantee of [25, Thm. 2].
Corollary 5. Suppose that $\mathbf{L}_0$ is $(\mu, r)$-coherent and that the uniformly distributed support set of
$\mathbf{S}_0$ has cardinality $s$. For a fixed positive constant $\rho_s$, define the undersampling parameter
$$\beta_s \triangleq \Big(1 - \frac{s}{mn}\Big)\Big/\rho_s,$$
and fix any target rate parameter $\beta > 2$ with rescaling $\beta' \triangleq \beta\log(\bar{n})/\log(m)$ satisfying $4\beta_s - 3/\rho_s \le \beta' \le \beta_s$. Then, if $\|\mathbf{M} - \mathbf{L}_0 - \mathbf{S}_0\|_F \le \Delta$ a.s., it suffices to choose $t = n/l$ and
$$l \ge \max\left\{\frac{r^2\mu^2\log^2(\bar{n})}{(1-\epsilon/2)\rho_r},\; \frac{4\log(\bar{n})\beta(1-\rho_s\beta_s)}{m(\rho_s\beta_s - \rho_s\beta')^2},\; \frac{cr\mu\log(n)\log(2/\delta)}{\epsilon^2}\right\},$$
$$d \ge \max\left\{\frac{r^2\mu^2\log^2(\bar{n})}{(1-\epsilon/2)\rho_r},\; \frac{4\log(\bar{n})\beta(1-\rho_s\beta_s)}{n(\rho_s\beta_s - \rho_s\beta')^2},\; \frac{cl\,\mu_0(\hat{\mathbf{C}})\log(m)\log(1/\delta)}{\epsilon^2}\right\}$$
to have
DFC-PROJ: $\|\mathbf{L}_0 - \hat{\mathbf{L}}^{proj}\|_F \le (2+\epsilon)\,c''_e\sqrt{mn}\,\Delta$
DFC-NYS: $\|\mathbf{L}_0 - \hat{\mathbf{L}}^{nys}\|_F \le (2+3\epsilon)\,c''_e(\sqrt{ml}+\sqrt{dn})\,\Delta$
with probability at least
DFC-PROJ: $(1-\delta)(1 - tc_p\bar{n}^{-\beta}) \ge (1-\delta)(1 - c_p\bar{n}^{1-\beta})$
DFC-NYS: $(1-\delta)^2(1 - 2c_p\bar{n}^{-\beta})$,
respectively, with $c$ as in Thm. 3 and $\rho_r$, $c''_e$, and $c_p$ positive constants.
Note that Cor. 5 places only very mild restrictions on the number of columns and rows to be sampled.
Indeed, $l$ and $d$ need only grow poly-logarithmically in the matrix dimensions to achieve high-probability noisy recovery.
5 Conclusions
To improve the scalability of existing matrix factorization algorithms while leveraging the ubiquity
of parallel computing architectures, we introduced, evaluated, and analyzed DFC, a divide-and-conquer framework for noisy matrix factorization with missing entries or outliers. We note that the
contemporaneous work of [19] addresses the computational burden of noiseless RMF by reformulating a standard convex optimization problem to internally incorporate random projections. The
differences between DFC and the approach of [19] highlight some of the main advantages of this
work: i) DFC can be used in combination with any underlying MF algorithm, ii) DFC is trivially
parallelized, and iii) DFC provably maintains the recovery guarantees of its base algorithm, even in
the presence of noise.
8
References
[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Noisy matrix decomposition via convex relaxation:
Optimal rates in high dimensions. In International Conference on Machine Learning, 2011.
[2] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58(3):1–37, 2011.
[3] E. J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
[4] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Sparse and low-rank matrix decompositions. In Allerton Conference on Communication, Control, and Computing, 2009.
[5] Y. Chen, H. Xu, C. Caramanis, and S. Sanghavi. Robust matrix completion and corrupted columns. In
International Conference on Machine Learning, 2011.
[6] P. Drineas, M. W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM
Journal on Matrix Analysis and Applications, 30:844–881, 2008.
[7] A. Frieze, R. Kannan, and S. Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations.
In Foundations of Computer Science, 1998.
[8] S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin. A theory of pseudoskeleton approximations.
Linear Algebra and its Applications, 261(1–3):1–21, 1997.
[9] D. Gross and V. Nesme. Note on sampling without replacing from a finite collection of matrices. CoRR,
abs/1001.2738, 2010.
[10] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American
Statistical Association, 58(301):13–30, 1963.
[11] D. Hsu, S. M. Kakade, and T. Zhang. Dimension-free tail inequalities for sums of random matrices.
arXiv:1104.1672v3[math.PR], 2011.
[12] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. Journal of Machine
Learning Research, 99:2057–2078, 2010.
[13] S. Kumar, M. Mohri, and A. Talwalkar. On sampling-based approximate spectral decomposition. In
International Conference on Machine Learning, 2009.
[14] S. Kumar, M. Mohri, and A. Talwalkar. Ensemble Nyström method. In NIPS, 2009.
[15] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast convex optimization algorithms for exact
recovery of a corrupted low-rank matrix. UIUC Technical Report UILU-ENG-09-2214, 2009.
[16] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, 128(1–2):321–353, 2011.
[17] K. Min, Z. Zhang, J. Wright, and Y. Ma. Decomposing background topics from keywords by principal
component pursuit. In Conference on Information and Knowledge Management, 2010.
[18] M. Mohri and A. Talwalkar. Can matrix coherence be efficiently and accurately estimated? In Conference
on Artificial Intelligence and Statistics, 2011.
[19] Y. Mu, J. Dong, X. Yuan, and S. Yan. Accelerated low-rank visual recovery by random projection. In
Conference on Computer Vision and Pattern Recognition, 2011.
[20] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal
bounds with noise. arXiv:1009.2118v2[cs.IT], 2010.
[21] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma. Rasl: Robust alignment by sparse and low-rank
decomposition for linearly correlated images. In Conference on Computer Vision and Pattern Recognition,
2010.
[22] B. Recht. A simpler approach to matrix completion. arXiv:0910.0651v2[cs.IT], 2009.
[23] K. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares
problems. Pacific Journal of Optimization, 6(3):615–640, 2010.
[24] C. K. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, 2000.
[25] Z. Zhou, X. Li, J. Wright, E. J. Candès, and Y. Ma. Stable principal component pursuit. arXiv:
1001.2363v1[cs.IT], 2010.
Contextual Gaussian Process Bandit Optimization
Andreas Krause
Cheng Soon Ong
Department of Computer Science, ETH Zurich,
8092 Zurich, Switzerland
[email protected]
[email protected]
Abstract
How should we design experiments to maximize performance of a complex
system, taking into account uncontrollable environmental conditions? How
should we select relevant documents (ads) to display, given information about the
user? These tasks can be formalized as contextual bandit problems, where at each
round, we receive context (about the experimental conditions, the query), and
have to choose an action (parameters, documents). The key challenge is to trade
off exploration by gathering data for estimating the mean payoff function over the
context-action space, and to exploit by choosing an action deemed optimal based
on the gathered data. We model the payoff function as a sample from a Gaussian
process defined over the joint context-action space, and develop CGP-UCB, an
intuitive upper-confidence style algorithm. We show that by mixing and matching
kernels for contexts and actions, CGP-UCB can handle a variety of practical applications. We further provide generic tools for deriving regret bounds when using
such composite kernel functions. Lastly, we evaluate our algorithm on two case
studies, in the context of automated vaccine design and sensor management. We
show that context-sensitive optimization outperforms no or naive use of context.
1 Introduction
Consider the problem of learning to optimize a complex system subject to varying environmental
conditions. Or learning to retrieve relevant documents (ads), given context about the user. Or learning to solve a sequence of related optimization and search tasks, by taking into account experience
with tasks solved previously. All these problems can be phrased as a contextual bandit problem (c.f.,
[1, 2], we review related work in Section 7), where in each round, we receive context (about the
experimental conditions, the query, or the task), and have to choose an action (system parameters,
document to retrieve). We then receive noisy feedback about the obtained payoff. The key challenge
is to trade off exploration by gathering data for estimating the mean payoff function over the context-action space, and to exploit by choosing an action deemed optimal based on the gathered data.
Without making any assumptions about the class of payoff functions under consideration, we
cannot expect to do well. A natural approach is to choose a regularizer, encoding assumptions
about smoothness of the payoff function. In this paper, we take a nonparametric approach, and
model the payoff function as a sample from a Gaussian process defined over the joint context-action
space (or having low norm in the associated RKHS). This approach allows us to estimate the
predictive uncertainty in the payoff function estimated from previous experiments, guiding the
tradeoff between exploration and exploitation. In the context-free case, this problem is studied
by [3], who analyze GP-UCB, an upper-confidence bound-based sampling algorithm that makes
use of the predictive uncertainty to trade exploration and exploitation. In this paper, we develop
CGP-UCB, a natural generalization of GP-UCB, which takes context information into account.
By constructing a composite kernel function for the regularizer from kernels defined over the action
and context spaces (e.g., a linear kernel on the actions, and Gaussian kernel on the contexts), we can
capture several natural contextual bandit problem formulations. We prove that CGP-UCB incurs
1
sublinear contextual regret (i.e., prove that it competes with the optimal mapping from context
to actions) for a large class of composite kernel functions constructed in this manner. Lastly, we
evaluate our algorithm on two real-world case studies in the context of automated vaccine design,
and management of sensor networks. We show that in both these problems, properly taking into
account contextual information outperforms ignoring or naively using context.
In summary, as our main contributions we
• develop an efficient algorithm, CGP-UCB, for the contextual GP bandit problem;
• show that by flexibly combining kernels over contexts and actions, CGP-UCB can be applied to a variety of applications;
• provide a generic approach for deriving regret bounds for composite kernel functions;
• evaluate CGP-UCB on two case studies, related to automated vaccine design and sensor management.
2 Modeling Contextual Bandits with Gaussian Processes
We consider playing a game for a sequence of $T$ (not necessarily known a priori) rounds. In each
round, we receive a context $z_t \in Z$ from a (not necessarily finite) set $Z$ of contexts, and have to
choose an action $s_t \in S$ from a (not necessarily finite) set $S$ of actions. We then receive a payoff
$y_t = f(s_t, z_t) + \epsilon_t$, where $f: S \times Z \to \mathbb{R}$ is an (unknown) function, and $\epsilon_t$ is zero-mean random
noise (independent across the rounds). The addition of (externally chosen) contextual information
captures a critical component in many applications, and generalizes the $k$-armed bandit setting.
Since $f$ is unknown, we will not generally be able to choose the optimal action, and thus incur
regret $r_t = \sup_{s' \in S} f(s', z_t) - f(s_t, z_t)$. After $T$ rounds, our cumulative regret is $R_T = \sum_{t=1}^T r_t$.
The context-specific best action is a more demanding benchmark than the best action used in the
(context-free) definition of regret. Our goal will be to develop an algorithm which achieves sublinear
contextual regret, i.e., $R_T/T \to 0$ for $T \to \infty$. Note that achieving sublinear contextual regret
requires learning (and competing with) the optimal mapping from contexts to actions.
Regularity assumptions are required, since without any there could be a single action $s^* \in S$ that
obtains payoff of 1, and all other actions obtain payoff 0. With infinite action sets, no algorithm will
be able to identify $s^*$ in finite time. In this paper, we assume that the function $f: S \times Z \to \mathbb{R}$
is a sample from a known Gaussian process (GP) distribution¹. A Gaussian process is a collection
of dependent random variables, one for each $x \in X$, such that every finite marginal distribution
is a multivariate Gaussian (while ensuring overall consistency) [4]. Here we use $X = S \times Z$
to refer to the set of all action-context pairs. A GP$(\mu, k)$ is fully specified by its mean function
$\mu: X \to \mathbb{R}$, $\mu(x) = \mathbb{E}[f(x)]$, and covariance (or kernel) function $k: X \times X \to \mathbb{R}$, $k(x, x') =
\mathbb{E}[(f(x) - \mu(x))(f(x') - \mu(x'))]$. Without loss of generality [4], we assume that $\mu \equiv 0$. We further
assume bounded variance by restricting $k(x, x) \le 1$, for all $x \in X$. The covariance function $k$
encodes smoothness properties of sample functions $f$ drawn from the GP. Since the random variables
are action-context pairs, often there is a natural decomposition of the covariance function $k$ into the
corresponding covariance functions on actions and contexts (Section 5).
A major computational benefit of working with GPs is the fact that posterior inference can be
performed in closed form. Suppose we have collected observations $\mathbf{y}_T = [y_1 \dots y_T]^\top$ at inputs
$A_T = \{x_1, \dots, x_T\}$, $y_t = f(x_t) + \epsilon_t$ with i.i.d. Gaussian noise $\epsilon_t \sim N(0, \sigma^2)$; the posterior
distribution over $f$ is a GP with mean $\mu_T(x)$, covariance $k_T(x, x')$ and variance $\sigma_T^2(x)$, with
parameters estimated as
$$\mu_T(x) = \mathbf{k}_T(x)^\top(\mathbf{K}_T + \sigma^2\mathbf{I})^{-1}\mathbf{y}_T,$$
$$k_T(x, x') = k(x, x') - \mathbf{k}_T(x)^\top(\mathbf{K}_T + \sigma^2\mathbf{I})^{-1}\mathbf{k}_T(x'),$$
$$\sigma_T^2(x) = k_T(x, x),$$
where $\mathbf{k}_T(x) = [k(x_1, x) \dots k(x_T, x)]^\top$ and $\mathbf{K}_T$ is the (positive semi-definite) kernel matrix
$[k(x, x')]_{x, x' \in A_T}$. The choice of the kernel function turns out to be crucial in regularizing the
function class to achieve sublinear regret (Section 4).
¹We will also consider the case where $f$ has low norm in the RKHS associated with the covariance $k$.
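For concreteness, the closed-form posterior above amounts to two linear solves per test point. The following is our own minimal sketch (a practical implementation would factor $\mathbf{K}_T + \sigma^2\mathbf{I}$ once with a Cholesky decomposition and reuse it):

```python
import numpy as np

def gp_posterior(K, k_star, k_star_star, y, sigma2):
    """GP posterior mean and variance at one test point x.

    K:           T x T kernel matrix [k(x_i, x_j)] on observed inputs.
    k_star:      length-T vector [k(x_1, x), ..., k(x_T, x)].
    k_star_star: scalar k(x, x).
    y:           length-T vector of noisy observations.
    sigma2:      noise variance.
    """
    A = K + sigma2 * np.eye(len(y))
    mu = k_star @ np.linalg.solve(A, y)
    var = k_star_star - k_star @ np.linalg.solve(A, k_star)
    return mu, var
```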
3 The Contextual Upper Confidence Bound Algorithm
In the context-free case $Z = \emptyset$, the problem of trading off exploration and exploitation with payoff
functions sampled from a Gaussian process is studied by [3]. They show that a simple upper confidence bound algorithm, GP-UCB (Equation 1), achieves sublinear regret. At round $t$, GP-UCB
picks action $s_t = x_t$ such that
$$s_t = \operatorname*{argmax}_{s \in S}\; \mu_{t-1}(s) + \beta_t^{1/2}\sigma_{t-1}(s), \qquad (1)$$
where $\beta_t$ are appropriate constants. Here $\mu_{t-1}(\cdot)$ and $\sigma_{t-1}(\cdot)$ are the posterior mean and standard deviation conditioned on the observations $(s_1, y_1), \dots, (s_{t-1}, y_{t-1})$. This GP-UCB objective
naturally trades off exploration (picking actions with uncertain outcomes, i.e., large $\sigma_{t-1}(s)$), and
exploitation (picking actions expected to do well, i.e., having large $\mu_{t-1}(s)$).
We propose a natural generalization of GP-UCB, which incorporates contextual information:
$$s_t = \operatorname*{argmax}_{s \in S}\; \mu_{t-1}(s, z_t) + \beta_t^{1/2}\sigma_{t-1}(s, z_t), \qquad (2)$$
where $\mu_{t-1}(\cdot)$ and $\sigma_{t-1}(\cdot)$ are the posterior mean and standard deviation of the GP over the joint
set $X = S \times Z$ conditioned on the observations $(s_1, z_1, y_1), \dots, (s_{t-1}, z_{t-1}, y_{t-1})$. Thus, when
presented with context $z_t$, this algorithm uses posterior inference to predict mean and variance for
each possible decision $s$, conditioned on all past observations (involving both the chosen actions, the
observed contexts as well as the noisy payoffs). We call the greedy algorithm implementing rule 2
the contextual Gaussian process UCB algorithm (CGP-UCB). As we will show in Section 5, this
algorithm allows to incorporate various assumptions about the dependencies of the payoff function
on the chosen actions and observed contexts. It also allows us to generalize several approaches
proposed in the literature [3, 5, 6]. In the following, we will prove that in many practical applications,
CGP-UCB attains sublinear contextual regret (i.e., is able to compete with the optimal mapping
from contexts to actions).
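A single round of rule (2) can be sketched as follows (our own illustration, assuming a finite candidate action set and a kernel defined on (action, context) pairs; an efficient implementation would update the posterior incrementally rather than re-solving from scratch):

```python
import numpy as np

def cgp_ucb_round(z, actions, kernel, X_hist, y_hist, sigma2, beta_t):
    """Score every action for context z and return the UCB maximizer.

    X_hist: list of past (action, context) pairs; y_hist: past payoffs.
    kernel: k(x, x') on the joint space X = S x Z, with x = (s, z).
    """
    best_s, best_ucb = None, -np.inf
    if X_hist:
        K = np.array([[kernel(a, b) for b in X_hist] for a in X_hist])
        A = K + sigma2 * np.eye(len(X_hist))
        alpha = np.linalg.solve(A, np.asarray(y_hist))
    for s in actions:
        x = (s, z)
        if X_hist:
            k_x = np.array([kernel(x, a) for a in X_hist])
            mu = k_x @ alpha
            var = kernel(x, x) - k_x @ np.linalg.solve(A, k_x)
        else:
            mu, var = 0.0, kernel(x, x)        # prior before any data
        ucb = mu + np.sqrt(beta_t) * np.sqrt(max(var, 0.0))
        if ucb > best_ucb:
            best_s, best_ucb = s, ucb
    return best_s
```

With the composite constructions of Section 5, `kernel` acts on pairs $x = (s, z)$, so the same routine covers all the applications discussed below.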
4 Bounds on the Contextual Regret
Bounding the contextual regret of CGP-UCB is a challenging problem, since the regret is measured
with respect to the best action for each context. Intuitively, the amount of regret we incur should
depend on how quickly we can gather information about the payoff function, which now jointly
depends on context and actions. In the following, we show that the contextual regret of CGP-UCB
is bounded by an intuitive information-theoretic quantity, which quantifies the mutual information
between the observed context-action pairs and the estimated payoff function f .
We start by reviewing the special case of [3] where no context information is provided. It is
shown that in this context-free case, the regret $R_T$ of the GP-UCB algorithm can be bounded as
$O^*(\sqrt{T\gamma_T})$, where $\gamma_T$ is defined as:
$$\gamma_T := \max_{A \subseteq S: |A| = T} I(\mathbf{y}_A; f),$$
where $I(\mathbf{y}_A; f) = H(\mathbf{y}_A) - H(\mathbf{y}_A \mid f)$ quantifies the reduction in uncertainty (measured in terms of
differential Shannon entropy [7]) about $f$ achieved by revealing $\mathbf{y}_A$. In the multivariate Gaussian
case, the entropy can be computed in closed form: $H(N(\mu, \Sigma)) = \frac{1}{2}\log|2\pi e\Sigma|$, so that $I(\mathbf{y}_A; f) =
\frac{1}{2}\log|\mathbf{I} + \sigma^{-2}\mathbf{K}_A|$, where $\mathbf{K}_A = [k(s, s')]_{s, s' \in A}$ is the Gram matrix of $k$ evaluated on set $A \subseteq S$.
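For a fixed set $A$, the information gain above is a one-line determinant computation (our own helper, for illustration):

```python
import numpy as np

def information_gain(K_A, sigma2):
    """I(y_A; f) = 0.5 * log det(I + sigma^{-2} K_A) for a Gram matrix K_A."""
    T = K_A.shape[0]
    sign, logdet = np.linalg.slogdet(np.eye(T) + K_A / sigma2)
    return 0.5 * logdet
```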
For the contextual case, our regret bound comes also in terms of the quantity $\gamma_T$, redefined so that the
information gain $I(\mathbf{y}_A; f)$ now depends on the observations $\mathbf{y}_A = [y(x)]_{x \in A}$ of the joint context-action pairs $x = (s, z)$, and $f: S \times Z \to \mathbb{R}$ is the payoff function over the context-action space.
Consequently, the kernel matrix $\mathbf{K}_A = [k(x, x')]_{x, x' \in A}$ is defined over context-action pairs. Using
this notion of information gain $\gamma_T$, we lift the results of [3] to the much more general contextual
bandit setting, shedding further light on the connection between bandit optimization and information
gain. In Section 5, we show how to bound $\gamma_T$ for composite kernels, combining possibly different
assumptions about the regularity of $f$ in the action space $S$ and context space $Z$.
We consider the same three settings as analyzed in [3]. Note that none of the results subsume each
other, and so all cases may be of use. For the first two settings, we assume a known GP prior and (1)
a finite X and (2) infinite X with mild assumptions about k. A third (and perhaps more "agnostic")
way to express assumptions about f is to require that f has low ?complexity? as quantified in terms
of the Reproducing Kernel Hilbert Space (RKHS, [8]) norm associated with kernel k.
Theorem 1. Let $\delta \in (0, 1)$. Suppose one of the following assumptions holds:
1. $X$ is finite, $f$ is sampled from a known GP prior with known noise variance $\sigma^2$, and $\beta_t = 2\log(|X|t^2\pi^2/6\delta)$;
2. $X \subseteq [0, r]^d$ is compact and convex, $d \in \mathbb{N}$, $r > 0$. Suppose $f$ is sampled from a known GP prior with known noise variance $\sigma^2$, and that $k(x, x')$ satisfies the following high probability bound on the derivatives of GP sample paths $f$: for some constants $a, b > 0$,
$$\Pr\left\{\sup_{x \in X}|\partial f/\partial x_j| > L\right\} \le ae^{-(L/b)^2}, \quad j = 1, \dots, d.$$
Choose $\beta_t = 2\log(t^2 2\pi^2/(3\delta)) + 2d\log\left(t^2 dbr\sqrt{\log(4da/\delta)}\right)$;
3. $X$ is arbitrary; $\|f\|_k \le B$. The noise variables $\epsilon_t$ form an arbitrary martingale difference sequence (meaning that $\mathbb{E}[\epsilon_t \mid \epsilon_1, \dots, \epsilon_{t-1}] = 0$ for all $t \in \mathbb{N}$), uniformly bounded by $\sigma$. Further define $\beta_t = 2B^2 + 300\gamma_t \ln^3(t/\delta)$.
Then the contextual regret of CGP-UCB is bounded by $O^*(\sqrt{T\gamma_T\beta_T})$ w.h.p. Precisely,
$$\Pr\left\{R_T \le \sqrt{C_1 T\beta_T\gamma_T} + 2 \;\;\forall T \ge 1\right\} \ge 1 - \delta,$$
where $C_1 = 8/\log(1 + \sigma^{-2})$.
Theorem 1 (proof given in the supplemental material) shows that, in case (1) and (2), with high
probability over samples from the GP, the cumulative contextual regret is bounded in terms of the
maximum information gain with respect to the GP defined over $S \times Z$. In case of assumption (3),
a regret bound is obtained in a more agnostic setting, where no prior on f is assumed, and much
weaker assumptions are made about the noise process. Note that case (3) requires a bound $B$ on
$\|f\|_k$. If no such bound is available, standard guess-and-doubling arguments can be used.
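As an illustration of case (1), the confidence width is an explicit function of $t$ (our own helper; `num_points` stands for $|X|$):

```python
import numpy as np

def beta_t(t, num_points, delta):
    """Theorem 1, case (1): beta_t = 2 log(|X| t^2 pi^2 / (6 delta))."""
    return 2.0 * np.log(num_points * t**2 * np.pi**2 / (6.0 * delta))
```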
5 Applications of CGP-UCB
By choosing different kernel functions $k: X \times X \to \mathbb{R}$, the CGP-UCB algorithm can be applied to a
variety of different applications. A natural approach is to start with kernel functions $k_Z: Z \times Z \to \mathbb{R}$
and $k_S: S \times S \to \mathbb{R}$ on the space of contexts and actions, and use them to derive the kernel on the
product space.
5.1 Constructing Composite Kernels
One possibility is to consider a product kernel $k = k_S \otimes k_Z$, by setting $(k_S \otimes k_Z)((s, z), (s', z')) = k_Z(z, z')k_S(s, s')$. The intuition behind this product kernel is a conjunction of the notions of similarity induced by the kernels over context and action spaces: two context-action pairs are similar
(large correlation) if the contexts are similar and the actions are similar (Figure 1(a)). Note that many
kernel functions used in practice are already in product form. For example, if $k_Z$ and $k_S$ are squared
exponential kernels (or Matérn kernels with smoothness parameters $\nu$), then the product $k = k_Z \otimes k_S$
is a squared exponential kernel (or Matérn kernel with smoothness parameter $\nu$). Similarly, if $k_S$
[Figure 1: Illustrations of composite kernel functions that can be incorporated into CGP-UCB. (a) Product of squared exponential kernel and linear kernel; (b) additive combination of a payoff function that smoothly depends on context, and exhibits clusters of actions. In general, context and action spaces are higher dimensional.]
and $k_Z$ have finite rank $m_S$ and $m_Z$ (i.e., all kernel matrices over finite sets have rank at most $m_S$
and $m_Z$ respectively), then $k_S \otimes k_Z$ has finite rank $m_S m_Z$. However, other kernel functions can be
naturally combined as well.
An alternative is to consider the additive combination $(k_S \oplus k_Z)((s, z), (s', z')) = k_Z(z, z') + k_S(s, s')$, which is positive definite as well. The intuition behind this construction is that a GP with
additive kernel can be understood as a generative model, which first samples a function $f_S(s, z)$ that
is constant along $z$, and varies along $s$ with regularity as expressed by $k_S$; it then samples a function
$f_Z(s, z)$, which varies along $z$ and is constant along $s$; then $f = f_S + f_Z$. Thus, the $f_Z$ component
models overall trends according to the context (e.g., encoding assumptions about similarity within
clusters of contexts), and the $f_S$ models action-specific deviations from this trend (Figure 1(b)). In
Section 5.3, we provide examples of applications that can be captured in this framework.
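Both constructions are trivial to realize in code. The following sketch (ours, with an arbitrary example choice of $k_S$ and $k_Z$) produces kernels on pairs $x = (s, z)$ that can be plugged directly into the CGP-UCB round above:

```python
import numpy as np

def product_kernel(k_S, k_Z):
    """(k_S (x) k_Z)((s, z), (s', z')) = k_S(s, s') * k_Z(z, z')."""
    return lambda x, xp: k_S(x[0], xp[0]) * k_Z(x[1], xp[1])

def additive_kernel(k_S, k_Z):
    """(k_S (+) k_Z)((s, z), (s', z')) = k_S(s, s') + k_Z(z, z')."""
    return lambda x, xp: k_S(x[0], xp[0]) + k_Z(x[1], xp[1])

# Example: squared exponential on actions, linear on contexts.
k_S = lambda s, sp: np.exp(-0.5 * np.sum((np.asarray(s) - np.asarray(sp))**2))
k_Z = lambda z, zp: float(np.dot(z, zp))
k = product_kernel(k_S, k_Z)
```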
5.2 Bounding the Information Gain for Composite Kernels
Since the key quantity governing the regret is the information gain $\gamma_T$, we would like to find a
convenient way of bounding $\gamma_T$ for composite kernels ($k_S \otimes k_Z$ and $k_S \oplus k_Z$), plugging in different
regularity assumptions for the contexts (via $k_Z$) and actions (via $k_S$). More formally, let us define
$$\gamma(T; k; V) = \max_{A \subseteq V, |A| \le T} \frac{1}{2}\log\left|\mathbf{I} + \sigma^{-2}[k(v, v')]_{v, v' \in A}\right|,$$
which quantifies the maximum possible information gain achievable by sampling $T$ points in a GP
defined over set $V$ with kernel function $k$. In [3, Theorem 5], bounds on $\gamma(T; k; V)$ were derived
for common kernel functions, including the linear kernel ($\gamma(T; k; V) = O(d\log T)$ for $d$ dimensions),
the squared exponential kernel ($\gamma(T; k; V) = O((\log T)^{d+1})$) and Matérn kernels ($\gamma(T; k; V) =
O(T^{d(d+1)/(2\nu+d(d+1))}\log T)$ for smoothness parameter $\nu$).
In the following, we show how $\gamma(T; k; V)$ can be bounded for composite kernels of the form $k_S \otimes k_Z$
and $k_S \oplus k_Z$, dependent on $\gamma(T; k_S; S)$ and $\gamma(T; k_Z; Z)$.
Theorem 2. Let $k_Z$ be a kernel function on $Z$ with rank at most $d$ (i.e., all Gram matrices over
arbitrary finite sets of points $A \subseteq Z$ have rank at most $d$). Then
$$\gamma(T; k_S \otimes k_Z; X) \le d\,\gamma(T; k_S; S) + d\log T.$$
The assumptions of Theorem 2 are satisfied, for example, if $|Z| < \infty$ and $\mathrm{rk}\,\mathbf{K}_Z = d$, or if $k_Z$ is a
$d$-dimensional linear kernel on $Z \subseteq \mathbb{R}^d$. Theorem 2 also holds with the roles of $k_Z$ and $k_S$ reversed.
Theorem 3. Let $k_S$ and $k_Z$ be kernel functions on $S$ and $Z$ respectively. Then for the additive
combination $k = k_S \oplus k_Z$ defined on $X$ it holds that
$$\gamma(T; k_S \oplus k_Z; X) \le \gamma(T; k_S; S) + \gamma(T; k_Z; Z) + 2\log T.$$
Proofs of Theorems 2 and 3 are given in the supplemental material. By combining the results above
with the information gain bounds of [3], we can immediately obtain that, e.g., $\gamma_T$ for the product of
a $d_1$-dimensional linear kernel and a $d_2$-dimensional Gaussian kernel is $O(d_1(\log T)^{d_2+1})$.
5.3 Example Applications
We now illustrate the generality of the CGP-UCB approach, by fleshing out four possible applications. In Section 6, we experimentally evaluate CGP-UCB on two of these applications.
Online advertising and news recommendation. Suppose an online service would like to display
query-specific ads. This is the textbook contextual bandit problem [9]. There are $|S| = m$ different
ads to select from, and each round we receive, for each ad $s \in S$, a feature vector $z_s$. Thus, the
complete context is $z = [z_1, \dots, z_m]$. [9] model the expected payoff for each action as an (unknown)
linear function $\mu(s, z) = z_s^\top\theta_s^*$. Hereby, $\theta_s^*$ models the dependence of action $s$ on the context $z$.
Besides online advertising, a similar model has been proposed and experimentally studied by [6]
for the problem of contextual news recommendation (see Section 7 for a discussion).
[Figure 2: CGP-UCB applied to the average (a) and maximum regret over all molecules (b) for three methods (GP-UCB ignoring contexts, GP-UCB merging contexts, and CGP-UCB) on the MHC benchmark; (c) context similarity using inter-task predictions.]
Both these problems are addressed by CGP-UCB by choosing $\mathbf{K}_S = \mathbf{I}$ as the $m \times m$ identity matrix, and $\mathbf{K}_Z$ as the linear kernel on the features.² In this application, additive kernel combinations may be useful
to model temporal dependencies of the overall click probabilities (e.g., during the evening, users may
or may not be more likely to click on an ad than during business hours).
Learning to control complex systems. Suppose we have a complex system and would like to
achieve some desired behavior, for example robot walking [10]. In such a setting, we may wish to
estimate a controller in a data-driven manner; however, we would also like to maximize the performance of the estimated controller, resulting in an exploration-exploitation tradeoff. In addition to
controller parameters $s \in S \subseteq \mathbb{R}^{d_S}$, the system may be exposed to changing (in an uncontrollable
manner) environmental conditions, which are provided as context $z \in Z \subseteq \mathbb{R}^{d_Z}$. The goal is thus
to learn which control parameters to apply in which conditions to maximize system performance.
In this case, we may consider using a linear kernel $k_Z(z, z') = z^\top z'$ to model the dependence of
the performance on environmental features, and a squared exponential kernel $k_S(s, s')$ to model the
smooth but nonlinear response of the system to the chosen control parameters. Theorems 1 and 2
bound $R_T = O^*(\sqrt{T d_Z(\log T)^{d_S+1}})$. Additive kernel combinations may allow to model the fact
that control in some contexts (environments) is inherently more difficult (or noisy).
Multi-task experimental design. Suppose we would like to perform a sequence of related
experiments. In particular, in Section 6.1 we consider the case of vaccine design. The aim is to
discover peptide sequences which bind to major histocompatibility complex molecules (MHC).
MHC molecules present fragments of proteins from within the cell to T cells, resulting in healthy
cells being left alone, while cells containing foreign proteins are attacked by the immune system.
Here, each experiment is associated with a set of features (encoding the MHC alleles), which are
provided as context $z$. The goal in each experiment is to choose a stimulus (the vaccine) $s \in S$
that maximizes an observed response (binding affinity). In this case, we may consider using a finite
inter-task covariance kernel $\mathbf{K}_Z$ with rank $m_Z$ to model the similarity of different experiments, and
a Gaussian kernel $k_S(s, s')$ to model the smooth but nonlinear dependency of the stimulus response
on the experimental parameters. Theorems 1 and 2 bound $R_T = O^*(\sqrt{T m_Z(\log T)^{d_S+1}})$.
Spatiotemporal monitoring with sensor networks. Suppose we have deployed a network of
sensors, which we wish to use to monitor the maximum temperature in a building. Due to battery
limitations, we would like, at each timestep, to only activate few sensors. We can cast this problem
in the contextual bandit setting, where time of day is considered as the context $z \in Z$, and each
action $s \in S$ corresponds to picking a sensor. Due to the fact that the sun is moving relative to the
building, the hottest point in the building changes depending on the time of the day, and we would
like to learn which sensors to activate at which time of the day. In this problem, we would estimate
a joint spatio-temporal covariance function (e.g., using the Matérn kernel), and use it for inference.
We show experimental results for this problem in Section 6.2.
6 Experiments
In our two experimental case studies, we aim to study how much context information can help. We
compare three methods: ignoring (correlation between) contexts by running a separate instance of
GP-UCB for every context (i.e., ignoring measurements from all but the current molecule or time);²
running a single instance of GP-UCB, merging together the context information (i.e., ignoring the
molecule or time information); and running CGP-UCB, conditioning on measurements made at
different contexts (MHC molecules considered / times of day) using the product kernel.
²[6] also propose a more complex hybrid model that uses features shared between the actions. This model is also captured in our framework by adding a second kernel function, which composes a low-rank (instead of $\mathbf{I}$) matrix with the linear kernel.
[Figure 3: CGP-UCB applied to temperature data from a network of 46 sensors at Intel Research Berkeley; temperature error over time for GP-UCB ignoring context, GP-UCB merging contexts, and CGP-UCB. Panels: (a) using minimum; (b) using average; (c) test data.]
6.1 Multi-task Bayesian Optimization of MHC Class-I Binding Affinity
We perform experiments in the multi-task vaccine design problem introduced in Section 5.3. In
our experiments, we focus on a subset of MHC class I molecules that have affinity binding scores
available. Each experimental design task corresponds to searching for maximally binding peptides,
which is a vital step in the design of peptide-based vaccines. We use the data from [11], which is
part of a benchmark set of MHC class I molecules [12]. The data contains binding affinities (IC50
values), as well as features extracted from the peptides. Peptides with IC50 values greater than 500
nM were considered non-binders, all others binders. We convert the IC50 values into negative log
scale, and normalize them so that 500 nM corresponds to zero, i.e. $-\log_{10}(\mathrm{IC}_{50}) + \log_{10}(500)$.
In total, we consider identifying peptides for seven different MHC molecules (i.e., seven related
tasks = contexts). The context similarity was obtained using the Hamming distance between amino
acids in the binding pocket [11] (see Figure 2(c)), and we used the Gaussian kernel on the extracted
features. We used a random subset of 1000 examples to estimate hyperparameters, and then
considered each MHC allele in the order shown in Figure 2(c). For each MHC molecule, we ran
CGP-UCB for 50 trials.
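The IC50 normalization used above is a one-liner; for illustration (our own helper), it reads:

```python
import numpy as np

def ic50_to_score(ic50_nm):
    """Negative log-scale binding score, zero at the 500 nM binder cutoff."""
    return -np.log10(ic50_nm) + np.log10(500.0)
```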
From Figure 2(a) we see that for the first three molecules (up to trial 150), which are strongly
correlated, merging contexts and CGP-UCB perform similarly, and both perform better than
ignoring observations from other MHC molecules previously considered. However, the fourth
molecule (A 0201) has little correlation with the earlier ones, and hence simply merging contexts
performs poorly. We also wish to study, how long it takes, in the worst-case over all seven
molecules, to identify a peptide with binding affinity of desired strength. Therefore, in Figure 2(b),
we plot, for each t from 1 to 50, the largest (across the seven tasks) discrepancy between the
maximum achievable affinity, and the best affinity score observed in the first t trials. We find that
by exploiting correlation among contexts, CGP-UCB outperforms the two baseline approaches.
6.2 Learning to Monitor Sensor Networks
We also apply CGP-UCB to the spatiotemporal monitoring problem described in Section 5. We
use data from 46 sensors deployed at Intel Research, Berkeley. The data set contains 4 days of
data, sampled at 5 minute intervals. We take the first 24 hours to fit (by maximizing the marginal
likelihood) parameters of a spatio-temporal covariance function (we choose the Matérn kernel with
$\nu = 2.5$). On the remaining 3 days of data (see Figure 3(c)), we then proceed by, at each time step,
sequentially activating 5 sensors and reporting the regret of the average and maximum temperature
measured (hereby the regret is the error in estimating the actual maximum temperature reported by
any of the 46 sensors).
Figure 3(a) (using the maximum temperature among the 5 readings each time step) and 3(b) (using
the average temperature) show the results of this experiment. Notice that ignoring contexts performs
poorly. Merging contexts (single instance of context-free GP-UCB) performs best for the first few
timesteps (since temperature is very similar, and the highest temperature sensor does not change).
However, after running CGP-UCB for more than one day of data (i.e., until context reoccurs), it
outperforms the other methods, since it is able to learn to query the maximum temperature sensors
as a function of the time of the day.
7 Related Work
The use of upper confidence bounds to trade off exploration and exploitation has been introduced
by [13], and studied thereafter [1, 14, 15, 16]. The approach for the classical k-armed bandit setting [17] has been generalized to more complex settings, such as infinite action sets and linear
payoff functions [14, 18], Lipschitz continuous payoff functions [15] and locally-Lipschitz functions [19]. However, there is a strong tradeoff between the strength of the assumptions and the achievable
regret bounds. For example, while $O(d\sqrt{T\log T})$ can be achieved in the linear setting [14], if only
Lipschitz continuity is assumed, regret bounds scale as $\Omega(T^{\frac{d+1}{d+2}})$ [15]. Srinivas et al. [3] analyze the
case where the payoff function is sampled from a GP, which encodes configurable assumptions. The
present work builds on and strictly generalizes their approach. In fact, in the context-free case, CGP-UCB is precisely the GP-UCB algorithm of [3]. The ability to incorporate contextual information,
however, significantly expands the class of applications of GP-UCB. Besides handling context and
bounding the stronger notion of contextual regret, in this paper we provide generic techniques for
obtaining regret bounds for composite kernels. An alternative rule (in the context-free setting) is the
Expected Improvement algorithm [20], for which no bounds on the cumulative regret are known.
For contextual bandit problems, work has focused on the case of finitely many actions, where the
goal is to obtain sublinear contextual regret against classes of functions mapping context to actions
[1]. This setting resembles (multi-class) classification problems, and regret bounds can be given
in terms of the VC dimension of the hypothesis space [2]. [6] present an approach, LinUCB, that
assumes that payoffs for each action are linear combinations (with unknown coefficients) of context
features. In [5], it is proven that a modified variant of LinUCB achieves sublinear contextual regret.
Theirs is a special case of our setting (assuming a linear kernel for the contexts and diagonal kernel
for the actions). Another related approach is taken by Slivkins [21], who presents several algorithms
with sublinear contextual regret for the case of infinite actions and contexts, assuming Lipschitz
continuity of the payoff function in the context-action space. In [22], this approach is generalized
to select sets of actions, and applied to a problem of diverse retrieval in large document collections.
However, in contrast to CGP-UCB, this approach does not enable stronger guarantees for smoother
or more structured payoff functions.
The construction of composite kernels is common in the context of multitask learning with GPs
[23, 24, 25]. Instead of considering a scalar GP with joint feature space $f: S \times Z \to \mathbb{R}$, they
consider a multioutput GP $f_{\mathrm{vec}}: S \to \mathbb{R}^Z$, and introduce output correlations as linear combinations
of latent channels or convolutions of GPs [25]. Our results are complementary to this line of work, as
we can make use of such kernel functions for "multi-task Bayesian optimization". Theorems 2 and 3
provide convenient ways for deriving regret bounds for such problems. There has been a significant
amount of work on GP optimization and response surface methods [26]. For example, [27] consider
sharing information across multiple sessions in a problem of parameter identification in animation
design. We are not aware of theoretical convergence results in case of context information, and our
Theorem 1 provides the first general approach to obtain rates.
8 Conclusions
We have described an algorithm, CGP-UCB, which addresses the exploration-exploitation tradeoff
in a large class of contextual bandit problems, where the regularity of the payoff function defined
over the action-context space is expressed in terms of a GP prior. As we discuss in Section 5, by
considering various kernel functions on actions and contexts this approach allows to handle a variety
of applications. We show that, similarly to the context-free case studied by [3], the key quantity
governing the regret is a mutual information between experiments performed by CGP-UCB and the
GP prior (Theorem 1). In contrast to prior work, however, our approach bounds the much stronger
notion of contextual regret (competing with the optimal mapping from contexts to actions). We
prove that in many practical settings, as discussed in Section 5, the contextual regret is sublinear. In
addition, Theorems 2 and 3 provide tools to construct bounds on this information theoretic quantity
given corresponding bounds on the context and actions. We also demonstrate the effectiveness of
CGP-UCB on two applications: computational vaccine design and sensor network management. In
both applications, we show that utilizing context information in the joint covariance function reduces
regret in comparison to ignoring or naively using the context.
Acknowledgments The authors wish to thank Christian Widmer for providing the MHC data, as
well as Daniel Golovin and Aleksandrs Slivkins for helpful discussions. This research was partially
supported by ONR grant N00014-09-1-1044, NSF grants CNS-0932392, IIS-0953413, DARPA
MSEE grant FA8650-11-1-7156 and SNF grant 200021 137971.
References
[1] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. JMLR, 3, 2002.
[2] John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In
NIPS, 2008.
[3] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting:
No regret and experimental design. In ICML, 2010.
[4] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[5] Wei Chu, Lihong Li, Lev Reyzin, , and Robert E. Schapire. Contextual bandits with linear payoff functions. In AISTATS, 2011.
[6] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. A contextual-bandit approach to personalized news article recommendation. In WWW, 2010.
[7] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley Interscience, 1991.
[8] G. Wahba. Spline Models for Observational Data. SIAM, 1990.
[9] Naoki Abe, Alan W. Biermann, and Philip M. Long. Reinforcement learning with immediate rewards and
linear hypotheses. Algorithmica, 37(4):263–293, 2003.
[10] D. Lizotte, T. Wang, M. Bowling, and D. Schuurmans. Automatic gait optimization with Gaussian process
regression. In IJCAI, pages 944–949, 2007.
[11] C. Widmer, N. Toussaint, Y. Altun, and G. Rätsch. Inferring latent task structure for multitask learning
by multiple kernel learning. BMC Bioinformatics, 11(Suppl 8:S5), 2010.
[12] B. Peters et. al. A community resource benchmarking predictions of peptide binding to mhc-i molecules.
PLoS Computational Biology, 2(6):e65, 2006.
[13] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Adv. Appl. Math., 6:4, 1985.
[14] V. Dani, T. P. Hayes, and S. Kakade. The price of bandit information for online optimization. In NIPS,
2007.
[15] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In STOC, pages 681–690,
2008.
[16] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In ECML, 2006.
[17] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Mach.
Learn., 47(2–3):235–256, 2002.
[18] V. Dani, T. P. Hayes, and S. M. Kakade. Stochastic linear optimization under bandit feedback. In COLT,
2008.
[19] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári. Online optimization in X-armed bandits. In NIPS,
2008.
[20] S. Grünewälder, J-Y. Audibert, M. Opper, and J. Shawe-Taylor. Regret bounds for gaussian process bandit
problems. In AISTATS, 2010.
[21] Aleksandrs Slivkins. Contextual bandits with similarity information. Technical Report 0907.3986, arXiv,
2009.
[22] Aleksandrs Slivkins, Filip Radlinski, and Sreenivas Gollapudi. Learning optimally diverse rankings over
large document collections. In ICML, 2010.
[23] Kai Yu, Volker Tresp, and Anton Schwaighofer. Learning gaussian processes from multiple tasks. In
ICML, 2005.
[24] Edwin V. Bonilla, Kian Ming A. Chai, and Christopher K. I. Williams. Multi-task gaussian process
prediction. In NIPS, 2008.
[25] Mauricio A. Álvarez, David Luengo, Michalis K. Titsias, and Neil D. Lawrence. Efficient multioutput
gaussian processes through variational inducing kernels. In AISTATS, 2010.
[26] E. Brochu, M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions,
with application to active user modeling and hierarchical reinforcement learning. In TR-2009-23, UBC,
2009.
[27] Eric Brochu, Tyson Brochu, and Nando de Freitas. A bayesian interactive optimization approach to
procedural animation design. In Eurographics, 2010.
Gradient-based kernel method for feature extraction
and variable selection
Kenji Fukumizu
The Institute of Statistical Mathematics
10-3 Midori-cho, Tachikawa, Tokyo 190-8562 Japan
[email protected]
Chenlei Leng
National University of Singapore
6 Science Drive 2, Singapore, 117546
[email protected]
Abstract
We propose a novel kernel approach to dimension reduction for supervised learning: feature extraction and variable selection; the former constructs a small number of features from predictors, and the latter finds a subset of predictors. First,
a method of linear feature extraction is proposed using the gradient of regression
function, based on the recent development of the kernel method. In comparison with other existing methods, the proposed one has wide applicability without
strong assumptions on the regressor or type of variables, and uses computationally
simple eigendecomposition, thus applicable to large data sets. Second, in combination of a sparse penalty, the method is extended to variable selection, following
the approach by Chen et al. [2]. Experimental results show that the proposed methods successfully find effective features and variables without parametric models.
1 Introduction
Dimension reduction is involved in most of modern data analysis, in which high dimensional data
must be handled. There are two categories of dimension reduction: feature extraction, in which a
linear or nonlinear mapping to a low-dimensional space is pursued, and variable selection, in which
a subset of variables is selected. This paper discusses both the methods in supervised learning.
Let (X, Y) be a random vector such that X = (X^1, \ldots, X^m) \in R^m. The domain of Y can be
arbitrary, either continuous, discrete, or structured. The goal of dimension reduction in supervised
setting is to find such features or a subset of variables X that explain Y as effectively as possible.
This paper focuses on linear dimension reduction, in which linear combinations of the components of
X are used to make effective features. Although there are many methods for extracting nonlinear
features, this paper confines its attentions on linear features, since linear methods are more stable
than nonlinear feature extraction, which depends strongly on the choice of the nonlinearity, and after
establishing a linear method, extension to a nonlinear one would not be difficult.
We first develop a method for linear feature extraction with kernels, and extend it to variable selection with a sparseness penalty. The most significant point of the proposed methods is that we do
not assume any parametric models on the conditional probability, or make strong assumptions on
the distribution of variables. This differs from many other methods, particularly for variable selection, where a specific parametric model is often assumed. Beyond the classical approaches such as
Fisher Discriminant Analysis and Canonical Correlation Analysis to linear dimension reduction, the
modern approach is based on the notion of conditional independence; we assume for the distribution
p(Y|X) = \tilde{p}(Y|B^T X),  or equivalently  Y \perp X \mid B^T X,    (1)
where B is a projection matrix (B^T B = I_d) onto a d-dimensional subspace (d < m) in R^m, and we
wish to estimate B. For variable selection, we further assume that some rows of B may be zero.
The subspace spanned by the columns of B is called the effective direction for regression, or EDR
space [14]. Our goal is thus to estimate B without specific parametric models for p(y|x).
First, consider the linear feature extraction based on Eq. (1). The first method using this formulation is the sliced inverse regression (SIR, [13]), which employs the fact that the inverse regression
E[X|Y ] lies in the EDR space under some assumptions. Many methods have been proposed in this
vein of inverse regression ([4, 12] among others). While the methods are computationally simple,
they often need some strong assumptions on the distribution of X such as elliptic symmetry.
There are two most relevant works to this paper. The first one is the dimension reduction with the
gradient of regressor E[Y |X = x] [11, 17]. As explained in Sec. 2.1, under Eq. (1) the gradient
is contained in the EDR space. One can thus estimate the space by some standard nonparametric
method. There are some limitations in this approach, however: the nonparametric gradient estimation in high-dimensional spaces is challenging, and the method may not work unless the noise
is additive. The second one is the kernel dimension reduction (KDR, [8, 9, 28]), which uses the
kernel method for characterizing the conditional independence to overcome various limitations of
existing methods. While KDR applies to a wide class of problems without any strong assumptions
on the distributions or types of X or Y , and shows high estimation accuracy for small data sets, its
optimization has a problem: the gradient descent method used for KDR may have local optima, and
needs many matrix inversions, which prohibits application to high-dimensional or large data.
We propose a kernel method for linear feature extraction using the gradient-based approach, but
unlike the existing ones [11, 17], the gradient is estimated based on the recent development of the
kernel method [9, 19]. It solves the problems of existing methods: by virtue of the kernel method, Y
can be of arbitrary type, and the kernel estimator is stable without careful decrease of bandwidth. It
solves also the problem of KDR: the estimator by an eigenproblem needs no numerical optimization.
The method is thus applicable to large and high-dimensional data, as we demonstrate experimentally.
Second, by using the above feature extraction in conjunction with a sparseness penalty, we propose a
novel method for variable selection. Recently extensive studies have been done for variable selection
with a sparseness penalty such as LASSO [23] and SCAD [6]. It is also known that with appropriate
choice of regularization coefficients they have oracle property [6, 25, 30]. These methods, however,
use some specific model for regression such as linear regression, which is a limitation of the methods.
Chen et al. [2] proposed a novel method for sparse variable selection based on the objective function
of linear feature extraction formulated as an eigenproblem such as SIR. We follow this approach to
derive our method for variable selection. Unlike the methods used in [2], the proposed one does not
require strong assumptions on the regressor or distribution, and thus provides a variable selection
method based on the conditional independence irrespective of the regression model.
2 Gradient-based kernel dimension reduction
2.1 Gradient of a regression function and dimension reduction
We review the basic idea of the gradient-based method [11, 17] for dimension reduction. Suppose
Y is an R-valued random variable. If the assumption of Eq. (1) holds, we have
\frac{\partial}{\partial x} E[Y|X = x] = \frac{\partial}{\partial x} \int y\, p(y|x)\, dy = \int y\, \frac{\partial}{\partial x} \tilde{p}(y|B^T x)\, dy = B \int y\, \frac{\partial}{\partial z} \tilde{p}(y|z)\big|_{z = B^T x}\, dy,
which implies that the gradient \frac{\partial}{\partial x} E[Y|X = x] at any x is contained in the EDR space. Based on
this fact, the average derivative estimates (ADE, [17]) has been proposed to estimate B. In the more
recent method [11], a standard local linear least squares with a smoothing kernel (not necessarily
positive definite, [5]) is used for estimating the gradient, and the dimensionality of the projection
is continuously reduced to the desired one in the iteration. Since the gradient estimation for highdimensional data is difficult in general, the iterative reduction is expected to give more accurate
estimation. We call the method in [11] iterative average derivative estimates (IADE) in the sequel.
2.2 Kernel method for estimating gradient of regression
For a set \Omega, a (R-valued) positive definite kernel k on \Omega is a symmetric kernel k : \Omega \times \Omega \to R
such that \sum_{i,j=1}^{n} c_i c_j k(x_i, x_j) \geq 0 for any x_1, \ldots, x_n in \Omega and c_1, \ldots, c_n \in R. It is known that
a positive definite kernel on \Omega uniquely defines a Hilbert space H consisting of functions on \Omega,
in which the reproducing property \langle f, k(\cdot, x) \rangle_H = f(x) (\forall f \in H) holds, where \langle \cdot, \cdot \rangle_H is the inner
product of H. The Hilbert space H is called the reproducing kernel Hilbert space (RKHS) associated
with k. We assume that an RKHS is always separable.
In deriving a kernel method based on the approach in Sec. 2.1, the fundamental tool is the reproducing property for the derivative of a function. It is known (e.g., [21] Sec. 4.3) that if a positive
definite kernel k(x, y) on an open set in the Euclidean space is continuously differentiable with respect to x and y, every f in the corresponding RKHS H is continuously differentiable. If further
\frac{\partial}{\partial x} k(\cdot, x) \in H, we have
\frac{\partial f}{\partial x} = \Big\langle f, \frac{\partial}{\partial x} k(\cdot, x) \Big\rangle_H .    (2)
This reproducing property combined with the following kernel estimator of the conditional expectation (see [8, 9, 19] for details) will provide a method for dimension reduction. Let (X, Y ) be a
random variable on \mathcal{X} \times \mathcal{Y} with probability P. We always assume that the p.d.f. p(x, y) and the
conditional p.d.f. p(y|x) exist, and that a positive definite kernel is measurable and bounded. Let k_X
and k_Y be positive definite kernels on \mathcal{X} and \mathcal{Y}, respectively, with respective RKHS H_X and H_Y.
The (uncentered) covariance operator C_{YX} : H_X \to H_Y is defined by the equation
\langle g, C_{YX} f \rangle_{H_Y} = E[f(X) g(Y)] = E\big[ \langle f, \Phi_X(X) \rangle_{H_X} \langle \Phi_Y(Y), g \rangle_{H_Y} \big]    (3)
for all f \in H_X, g \in H_Y, where \Phi_X(x) = k_X(\cdot, x) and \Phi_Y(y) = k_Y(\cdot, y). Similarly, C_{XX}
denotes the operator on H_X that satisfies \langle f_2, C_{XX} f_1 \rangle = E[f_2(X) f_1(X)] for any f_1, f_2 \in H_X.
These definitions are straightforward extensions of the ordinary covariance matrices, if we consider the covariance of the random vectors \Phi_X(X) and \Phi_Y(Y) on the RKHSs. One of the advantages of the kernel method is that estimation with finite data is straightforward. Given i.i.d. sample
(X1 , Y1 ), . . . , (Xn , Yn ) with law P , the covariance operator is estimated by
\hat{C}_{YX}^{(n)} f = \frac{1}{n} \sum_{i=1}^{n} k_Y(\cdot, Y_i) \langle k_X(\cdot, X_i), f \rangle_{H_X},  \qquad  \hat{C}_{XX}^{(n)} f = \frac{1}{n} \sum_{i=1}^{n} k_X(\cdot, X_i) \langle k_X(\cdot, X_i), f \rangle_{H_X}.    (4)
It is known [8] that if E[g(Y)|X = \cdot] \in H_X holds for g \in H_Y, then we have C_{XX} E[g(Y)|X = \cdot] = C_{XY} g. If further C_{XX} is injective^1, this relation can be expressed as
E[g(Y)|X = \cdot] = C_{XX}^{-1} C_{XY} g.    (5)
While the assumption E[g(Y)|X = \cdot] \in H_X may not hold in general, we can nonetheless obtain an
empirical estimator based on Eq. (5), namely,
(\hat{C}_{XX}^{(n)} + \varepsilon_n I)^{-1} \hat{C}_{XY}^{(n)} g,
where \varepsilon_n is a regularization coefficient in Tikhonov-type regularization. Note that the above expression is the kernel ridge regression of g(Y) on X. As we discuss in Supplements, we can in fact
prove rigorously that this estimator converges to E[g(Y)|X = \cdot].
Assume now that \mathcal{X} = R^m, C_{XX} is injective, k_X(x, \tilde{x}) is continuously differentiable, E[g(Y)|X = x] \in H_X for any g \in H_Y, and \frac{\partial}{\partial x} k_X(\cdot, x) \in \mathcal{R}(C_{XX}), where \mathcal{R} denotes the range of the operator.
From Eqs. (5) and (2),
\frac{\partial}{\partial x} E[g(Y)|X = x] = \Big\langle C_{XX}^{-1} C_{XY} g, \frac{\partial k_X(\cdot, x)}{\partial x} \Big\rangle = \Big\langle g, C_{YX} C_{XX}^{-1} \frac{\partial k_X(\cdot, x)}{\partial x} \Big\rangle.
With g = k_Y(\cdot, \tilde{y}), we obtain the gradient of regression of the feature vector \Phi_Y(Y) on X as
\frac{\partial}{\partial x} E[\Phi_Y(Y)|X = x] = C_{YX} C_{XX}^{-1} \frac{\partial k_X(\cdot, x)}{\partial x}.    (6)
2.3 Gradient-based kernel method for linear feature extraction
It follows from the same argument as in Sec. 2.1 that \frac{\partial}{\partial x} E[k_Y(\cdot, y)|X = x] = \Theta(x) B with an
operator \Theta(x) from R^m to H_Y, where we use a slight abuse of notation by identifying the operator
\Theta(x) with a matrix. In combination with Eq. (6), we have
B^T \langle \Theta(x), \Theta(x) \rangle_{H_Y} B = \Big\langle \frac{\partial k_X(\cdot, x)}{\partial x}, C_{XX}^{-1} C_{XY} C_{YX} C_{XX}^{-1} \frac{\partial k_X(\cdot, x)}{\partial x} \Big\rangle_{H_X} =: M(x),    (7)
which shows that the eigenvectors for non-zero eigenvalues of the m \times m matrix M(x) are contained
in the EDR space. This fact is the basis of our method. In contrast to the conventional gradient-based method described in Sec. 2.1, this method incorporates the high (or infinite) dimensional regressor
E[\Phi_Y(Y)|X = x].
^1 Noting \langle C_{XX} f, f \rangle = E[f(X)^2], it is easy to see that C_{XX} is injective, if k_X is a continuous kernel on a
topological space \mathcal{X}, and P_X is a Borel probability measure such that P(U) > 0 for any open set U in \mathcal{X}.
Given i.i.d. sample (X_1, Y_1), \ldots, (X_n, Y_n) from the true distribution, based on the empirical covariance operators Eq. (4) and regularized inversions, the matrix M(x) is estimated by
\hat{M}_n(x) = \Big\langle \frac{\partial k_X(\cdot, x)}{\partial x},\, \big(\hat{C}_{XX}^{(n)} + \varepsilon_n I\big)^{-1} \hat{C}_{XY}^{(n)} \hat{C}_{YX}^{(n)} \big(\hat{C}_{XX}^{(n)} + \varepsilon_n I\big)^{-1} \frac{\partial k_X(\cdot, x)}{\partial x} \Big\rangle
= \nabla k_X(x)^T (G_X + n \varepsilon_n I)^{-1} G_Y (G_X + n \varepsilon_n I)^{-1} \nabla k_X(x),    (8)
where G_X and G_Y are the Gram matrices (k_X(X_i, X_j)) and (k_Y(Y_i, Y_j)), respectively, and
\nabla k_X(x) = (\partial k_X(X_1, x)/\partial x, \cdots, \partial k_X(X_n, x)/\partial x)^T \in R^n.
As the eigenvectors of M(x) are contained in the EDR space for any x, we propose to use the
average of M(X_i) over all the data points X_i, and define
\tilde{M}_n := \frac{1}{n} \sum_{i=1}^{n} \hat{M}_n(X_i) = \frac{1}{n} \sum_{i=1}^{n} \nabla k_X(X_i)^T (G_X + n \varepsilon_n I_n)^{-1} G_Y (G_X + n \varepsilon_n I_n)^{-1} \nabla k_X(X_i).
We call the dimension reduction with the matrix \tilde{M}_n the gradient-based kernel dimension reduction
(gKDR). For linear feature extraction, the projection matrix B in Eq. (1) is then estimated simply
by the top d eigenvectors of \tilde{M}_n. We call this method gKDR-FEX.
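As a concrete illustration, the following is a minimal sketch of gKDR-FEX for Gaussian kernels, directly implementing Eq. (8) and the eigendecomposition of \tilde{M}_n. This is not the authors' code: the kernel widths and the regularization coefficient are taken as given here, whereas the paper selects them by cross-validation, and the naive O(n^3) inversions are kept for clarity.

import numpy as np

def _gram(Z, sigma):
    # Gaussian Gram matrix exp(-||z_i - z_j||^2 / (2 sigma^2))
    sq = np.sum(Z**2, axis=1)
    return np.exp(-(sq[:, None] + sq[None, :] - 2 * Z @ Z.T) / (2 * sigma**2))

def gkdr_matrix(X, Y, sigma_x, sigma_y, eps):
    # The m x m matrix tilde M_n of Eq. (8), for Gaussian kernels.
    n, m = X.shape
    Gx = _gram(X, sigma_x)
    Gy = _gram(Y.reshape(n, -1), sigma_y)
    A = Gx + n * eps * np.eye(n)
    # F = (G_X + n*eps*I)^{-1} G_Y (G_X + n*eps*I)^{-1}, using symmetry of A and G_Y
    F = np.linalg.solve(A, np.linalg.solve(A, Gy).T)
    M = np.zeros((m, m))
    for i in range(n):
        # Row j of dK is d k_X(X_j, x)/dx at x = X_i, i.e. (X_j - X_i)/sigma^2 * k_X(X_j, X_i)
        dK = (X - X[i]) / sigma_x**2 * Gx[:, [i]]
        M += dK.T @ F @ dK
    return M / n

def gkdr_fex(X, Y, d, sigma_x, sigma_y, eps):
    # gKDR-FEX: top-d eigenvectors of tilde M_n as the estimate of B.
    w, V = np.linalg.eigh(gkdr_matrix(X, Y, sigma_x, sigma_y, eps))
    return V[:, ::-1][:, :d]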
The proposed method applies to a wide class of problems; in contrast to many existing methods,
the gKDR-FEX can handle any type of data for Y including multinomial or structured variables,
and makes no strong assumptions on the regressor or distribution of X. Additionally, since the
gKDR incorporates the high dimensional feature vector \Phi_Y(Y), it works for any regression relation
including multiplicative noise, for which many existing methods such as SIR and IADE fail.
As in all kernel methods, the results of gKDR depend on the choice of kernels. We use cross-validation (CV) for choosing kernels and parameters, combined with some regression or classification method. In this paper, the k-nearest neighbor (kNN) regression / classification is used in CV
for its simplicity: for each candidate of a kernel or parameter, we compute the CV error by the kNN
method with (B^T X_i, Y_i), where B is given by gKDR, and choose the one that gives the least error.
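A hypothetical version of this selection loop is sketched below. The candidate grids follow the settings reported in Sec. 2.5 (c \sigma_{med} with 0.5 \leq c \leq 10 and \varepsilon_n = 10^{-\ell}), while the use of scikit-learn's kNN regressor, 5-fold CV, and sharing one width between k_X and k_Y are illustrative simplifications; for classification one would substitute a kNN classifier.

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

def select_gkdr_params(X, Y, d, k=5):
    # Hypothetical CV loop over kernel width and regularization for gKDR-FEX.
    sq = np.sum(X**2, axis=1)
    D = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0))
    sigma_med = np.median(D[np.triu_indices_from(D, k=1)])  # median pairwise distance
    best, best_err = None, np.inf
    for c in [0.5, 1, 2, 4, 6, 8, 10]:
        for ell in [4, 5, 6, 7]:
            B = gkdr_fex(X, Y, d, c * sigma_med, c * sigma_med, 10.0**(-ell))
            Z = X @ B                                       # projected features B^T X_i
            err = -cross_val_score(KNeighborsRegressor(k), Z, Y,
                                   scoring="neg_mean_squared_error").mean()
            if err < best_err:
                best, best_err = (c * sigma_med, 10.0**(-ell), B), err
    return best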
The time complexity of the matrix inversions and the eigendecomposition for gKDR is O(n^3),
which is prohibitive for large data sets. We can apply, however, low-rank approximation of Gram
matrices, such as incomplete Cholesky decomposition. The space complexity may also be a problem
of gKDR, since (\nabla k_X(X_i))_{i=1}^n has n^2 \times m dimension. In the case of the Gaussian kernel, where
\frac{\partial}{\partial x^a} k_X(X_j, x)\big|_{x = X_i} = -\frac{1}{\sigma^2}(X_j^a - X_i^a) \exp(-\|X_j - X_i\|^2 / (2\sigma^2)),
we have a way of reducing the necessary memory by low rank approximation. Let G_X \approx R R^T and G_Y \approx H H^T be the
low rank approximations with r_x = \mathrm{rk}\, R, r_y = \mathrm{rk}\, H (r_x, r_y < n, m). With the notation F :=
(G_X + n \varepsilon_n I_n)^{-1} H and \Theta_i^{as} = \frac{1}{\sigma^2} X_i^a R_{is}, we have, for 1 \leq a, b \leq m,
\tilde{M}_{n,ab} = \frac{1}{n} \sum_{i=1}^{n} \sum_{t=1}^{r_y} \Psi_{ia}^t \Psi_{ib}^t, \qquad \Psi_{ia}^t = \sum_{s=1}^{r_x} \Theta_i^{as} \sum_{j=1}^{n} R_{js} F_{jt} - \sum_{s=1}^{r_x} R_{is} \sum_{j=1}^{n} \Theta_j^{as} F_{jt}.
With this method, the complexity is O(nmr) in space and O(nm^2 r) in time (r = \max\{r_x, r_y\}),
which is much more efficient in memory than straightforward implementation.
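Under the reconstruction of the formula above, a hypothetical implementation of this memory-efficient computation might look as follows. The O(n^3) linear solve is kept for brevity (a Woodbury identity would avoid it), and the overall sign of \Psi cancels in the final product, so the sign convention does not matter.

def gkdr_matrix_lowrank(X, R, H, sigma_x, eps):
    # Low-rank tilde M_n with G_X ~ R R^T and G_Y ~ H H^T
    # (e.g. factors from an incomplete Cholesky decomposition).
    n, m = X.shape
    ry = H.shape[1]
    F = np.linalg.solve(R @ R.T + n * eps * np.eye(n), H)   # (n, r_y)
    RF = R.T @ F                                            # (r_x, r_y)
    Psi = np.empty((m, n, ry))                              # O(n m r) memory
    for a in range(m):
        Theta_a = (X[:, [a]] * R) / sigma_x**2              # Theta_j^{as} = X_j^a R_js / sigma^2
        Psi[a] = Theta_a @ RF - R @ (Theta_a.T @ F)
    return np.einsum('ait,bit->ab', Psi, Psi) / n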
We introduce two variants of gKDR-FEX. First, since accurate nonparametric estimation with high-dimensional X is not easy, we propose a method for decreasing the dimensionality iteratively. Using
gKDR-FEX, we first find a matrix B_1 of dimensionality d_1 larger than the target d, project data X_i
onto the subspace as Z_i^{(1)} = B_1^T X_i, find the projection matrix B_2 (a d_1 \times d_2 matrix) for Z_i^{(1)} onto a
d_2 (d_2 < d_1) dimensional subspace, and repeat this process. We call this method gKDR-FEXi.
Second, if Y takes only L points as in classification, the Gram matrix G_Y and thus \tilde{M}_n are of rank L
at most (see Eq. (8)), which is a strong limitation of gKDR. Note that this problem is shared by many
linear dimension reduction methods including CCA and slice-based methods. To solve this problem,
we propose to use the variation of \hat{M}_n(x) over the points x = X_i instead of the average \tilde{M}_n. By
partitioning \{1, \ldots, n\} into T_1, \ldots, T_\ell, the projection matrices \hat{B}_{[a]} given by the eigenvectors of
\hat{M}_{[a]} = \sum_{i \in T_a} \hat{M}(X_i) are used to define \hat{P} = \frac{1}{\ell} \sum_{a=1}^{\ell} \hat{B}_{[a]} \hat{B}_{[a]}^T. The estimator of B is then given
by the top d eigenvectors of \hat{P}. We call this method gKDR-FEXv.
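A sketch of gKDR-FEXv along these lines is given below, reusing _gram from the earlier sketch. The number of partitions and the random partitioning are illustrative choices; note that the per-point matrices \hat{M}(X_i) use the full-sample Gram inversions of Eq. (8), and only the averaging of projectors is done per partition.

def gkdr_fexv(X, Y, d, sigma_x, sigma_y, eps, n_parts=4,
              rng=np.random.default_rng(0)):
    n, m = X.shape
    Gx = _gram(X, sigma_x)
    Gy = _gram(Y.reshape(n, -1), sigma_y)
    A = Gx + n * eps * np.eye(n)
    F = np.linalg.solve(A, np.linalg.solve(A, Gy).T)
    P = np.zeros((m, m))
    for part in np.array_split(rng.permutation(n), n_parts):
        M_a = np.zeros((m, m))
        for i in part:
            dK = (X - X[i]) / sigma_x**2 * Gx[:, [i]]
            M_a += dK.T @ F @ dK
        w, V = np.linalg.eigh(M_a)
        B_a = V[:, ::-1][:, :d]
        P += B_a @ B_a.T / n_parts        # average of the projectors B_a B_a^T
    w, V = np.linalg.eigh(P)
    return V[:, ::-1][:, :d]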
2.4 Theoretical analysis of gKDR
We have derived the gKDR method based on the necessary condition of EDR space. The following
theorem shows that it is also sufficient, if k_Y is characteristic. A positive definite kernel k on a
              gKDR-FEX  gKDR-FEXi  gKDR-FEXv  IADE    SIR II  KDR     gKDR-FEX+KDR
(A) n = 100   0.1989    0.1639     0.2002     0.1372  0.2986  0.2807  0.0883
(A) n = 200   0.1264    0.0995     0.1287     0.0857  0.2077  0.1175  0.0501
(B) n = 100   0.1500    0.1358     0.1630     0.1690  0.3137  0.2138  0.1076
(B) n = 200   0.0755    0.0750     0.0802     0.0940  0.2129  0.1440  0.0506
(C) n = 200   0.1919    0.2322     0.1930     0.7724  0.7326  0.1479  0.1285
(C) n = 400   0.1346    0.1372     0.1369     0.7863  0.7167  0.0897  0.0893
Table 1: gKDR-FEX for synthetic data: mean discrepancies over 100 runs.
measurable space is characteristic if E_P[k(\cdot, X)] = E_Q[k(\cdot, X)] means P = Q, i.e., the mean of
the feature vector uniquely determines a probability [9, 20]. Examples include the Gaussian kernel.
In the following theoretical results, we assume (i) \partial k_X(\cdot, x)/\partial x^a \in \mathcal{R}(C_{XX}) (a = 1, \ldots, m), (ii)
E[k_Y(y, X)|X = \cdot] \in H_X for any y \in \mathcal{Y}, and (iii) E[g(Y)|B^T X = z] is a differentiable function
of z for any g \in H_Y and the linear functional g \mapsto \partial E[g(Y)|B^T X = z]/\partial z is continuous for any z.
In the sequel, the subspace spanned by the columns of B is denoted by Span(B), and the Frobenius
norm of a matrix M by \|M\|_F. The proofs are given in Supplements.
Theorem 1. In addition to the above assumptions (i)-(iii), assume that the kernel k_Y is characteristic. If the eigenspaces for the non-zero eigenvalues of E[M(X)] are included in Span(B), then Y
and X are conditionally independent given B^T X.
We can obtain the rate of consistency for \hat{M}_n(x) and \tilde{M}_n.
Theorem 2. In addition to (i)-(iii), assume that \partial k_X(\cdot, x)/\partial x^a \in \mathcal{R}(C_{XX}^{\beta+1}) (a = 1, \ldots, m) for some
\beta \geq 0, and E[k_Y(y, Y)|X = \cdot] \in H_X for every y \in \mathcal{Y}. Then, for \varepsilon_n = n^{-\max\{1/3,\, 1/(2\beta+2)\}}, we have
\hat{M}_n(x) - M(x) = O_p\big( n^{-\min\{1/3,\, (2\beta+1)/(4\beta+4)\}} \big)
for every x \in \mathcal{X} as n \to \infty. If further E[\|M(X)\|_F^2] < \infty and \partial k_X(\cdot, x)/\partial x^a = C_{XX}^{\beta+1} h_x^a with
E\|h_X^a\|_{H_X} < \infty, then \tilde{M}_n \to E[M(X)] in the same order as above.
Note that, assuming that the eigenvalues of M(x) or E[M(X)] are all distinct, the convergence
of matrices implies the convergence of the eigenvectors [22], thus the estimator of gKDR-FEX is
consistent to the subspace given by the top eigenvectors of E[M(X)].
2.5 Experiments with gKDR-FEX
We always use the Gaussian kernel k(x, \tilde{x}) = \exp(-\frac{1}{2\sigma^2} \|x - \tilde{x}\|^2) in the kernel method below. First
we use three synthetic data sets to verify the basic performance of gKDR-FEX(i,v). The data are generated by (A): Y = Z \sin(\sqrt{5} Z) + W, Z = \frac{1}{\sqrt{5}}(1, 2, 0, \ldots, 0)^T X, (B): Y = (Z_1^3 + Z_2)(Z_1 - Z_2^3) + W,
Z_1 = \frac{1}{\sqrt{2}}(1, 1, 0, \ldots, 0)^T X, Z_2 = \frac{1}{\sqrt{2}}(1, -1, 0, \ldots, 0)^T X, where 10-dimensional X is generated
by the uniform distribution on [-1, 1]^{10} and W is independent noise with N(0, 10^{-2}), and (C):
Y = Z^4 E, Z = (1, 0, \ldots, 0)^T X, where each component of 10-dimensional X is independently
generated by the truncated normal distribution N(0, 1/4) \cdot I_{[-1,1]} and E \sim N(0, 1) is a multiplicative noise. The discrepancy between the estimator B and the true projector B_0 is measured by
\|B_0 B_0^T (I_m - B B^T)\|_F / d. For choosing the parameter \sigma in the Gaussian kernel and the regularization
parameter \varepsilon_n, the CV in Sec. 2.3 with kNN (k = 5, manually chosen to optimize the results) is
used with 8 different values given by c \sigma_{med} (0.5 \leq c \leq 10), where \sigma_{med} is the median of pairwise
distances of the data [10], and \ell = 4, 5, 6, 7 for \varepsilon_n = 10^{-\ell} (a similar strategy is used for the CV below).
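For concreteness, data set (A) can be generated as follows (a sketch; the random seed is arbitrary):

def make_dataset_A(n, rng=np.random.default_rng(0)):
    # Synthetic data (A): Y = Z sin(sqrt(5) Z) + W, with Z a 1-d projection of X.
    X = rng.uniform(-1, 1, size=(n, 10))
    b = np.zeros(10); b[0], b[1] = 1.0, 2.0
    Z = X @ (b / np.sqrt(5))
    Y = Z * np.sin(np.sqrt(5) * Z) + rng.normal(0, 0.1, size=n)  # W ~ N(0, 10^-2)
    return X, Y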
We compare the results with those of IADE, SIR II [13], and KDR. The IADE has seven parameters
[11], and we tuned two of them (h1 and ?min ) manually to optimize the performance. For SIR II,
we tried several numbers of slices, and chose the one that gave the best result. From Table 1, we see
that gKDR-FEX(i,v) show much better results than SIR II in all the cases. The IADE works better
than these methods for (A), while for (B) and (C) it works worse. Since (C) has multiplicative noise,
the IADE does not obtain meaningful estimation. The KDR attains higher accuracy for (C), but less
accurate for (A) and (B) with n = 100; this undesired result is caused by failure of optimization in
[Figure 1: three panels plotting classification rate (%) against dimensionality for (H) Heart Disease (m: 13, n_tr: 129, n_test: 148), (I) Ionosphere (m: 34, n_tr: 151, n_test: 200), and (B) Breast-cancer-Wisconsin (m: 30, n_tr: 200, n_test: 369), each comparing gKDR-v, KDR, and all variables.]
Figure 1: Classification accuracy with gKDR-v and KDR for binary classification problems. m, n_tr
and n_test are the dimension of X, training data size, and testing data size, respectively.
ISOLET (test error, %):
Dim            10     20     30     40     50
gKDR + kNN     13.53  4.55   -      -      -
gKDR-v + kNN   13.15  4.55   4.81   5.26   5.58
CCA + kNN      22.77  6.74   -      -      -
SIR-II + kNN   77.42  70.11  63.44  52.66  50.61
gKDR + SVM     14.43  5.00   -      -      -
gKDR-v + SVM   16.87  4.75   3.85   3.59   3.08
CCA + SVM      13.09  6.54   -      -      -

Amazon Reviews (10-fold CV error, %):
L                   10    20    30    40    50
gKDR + SVM          12.0  16.2  18.0  21.8  19.5
Corr + SVM (500)    15.7  30.2  29.2  35.4  41.1
Corr + SVM (2000)   8.3   18.0  24.0  25.0  29.0

Table 2: Left: ISOLET - classification errors for test data (percentage). Right: Amazon Reviews - 10-fold cross-validation errors (%) for classification.
some runs (see Supplements for error bars). We also used the results of gKDR-FEX as the initial
state for KDR, which improved the accuracy significantly for (A) and (B). Note however that these
data sets are very small in size and dimension, and KDR is not applicable to the large data sets used later.
One way of evaluating dimension reduction methods in supervised learning is to consider the classification or regression accuracy after projecting data onto the estimated subspaces. We next used three
data sets for binary classification, heart-disease (H), ionosphere (I), and breast-cancer-Wisconsin
(B), from UCI repository [7], and evaluated the classification rates of gKDR-FEXv with kNN classifiers (k = 7). We compared them with KDR, as KDR shows high accuracy for small data sets.
From Fig. 1, we see gKDR-FEXv shows competitive accuracy with KDR: slightly worse for (I), and
slightly better for (B). The computation of gKDR-FEXv for these data sets can be much faster than
that of KDR. For each parameter set, the computational time of gKDR vs KDR was, in (H) 0.044
sec / 622 sec (d = 11), in (I) 0.103 sec / 84.77 sec (d = 20), and in (B) 0.116 sec / 615 sec (d = 20).
The next two data sets, taken from the UCI repository, are larger in sample size and dimensionality,
for which the optimization of KDR is difficult to apply. The first one is ISOLET, which provides
617-dimensional continuous features of speech signals to classify the 26 letters of the alphabet. In addition to 6238
training data, 1559 test data are separately provided. We evaluate the classification errors with the
kNN classifier (k = 5) and 1-vs-1 SVM to see the effectiveness of the estimated subspaces (see
Table 2). From the information on the data at the UCI repository, the best performance with neural
networks and C4.5 with ECOC are 3.27% and 6.61%, respectively. In comparison with these results,
the low dimensional subspaces found by gKDR-FEX and gKDR-FEXv maintain the information for
classification effectively. SIR-II does not find meaningful features.
The second data set is author identification of Amazon commerce reviews with 10000 dimensional
linguistic features. The total number of authors is 50, and 30 reviews were collected for each author;
the total size of the data is thus 1500. We varied the number of authors used (L) to make different levels
of difficulty for the tasks. The reduced dimensionality by gKDR-FEX is set to the same as L, and the
10-fold CV errors with data projected on the estimated EDR space are evaluated using 1-vs-1 SVM.
As comparison, the squared sum of variable-wise Pearson correlations, \sum_{\ell=1}^{L} \mathrm{Corr}[X^a, Y^\ell]^2, is also
used for choosing explanatory variables (a = 1, . . . , 10000). Such variable selection methods with
Pearson correlation are popularly used for very high dimensional data. The variables with top 500
and 2000 correlations are used to make SVM classifiers. As we can see from Table 2, the gKDR-FEX gives much more effective subspaces for regression than the Pearson correlation method, when
the number of authors is large. The creator of the data set has also reported the classification result
with a neural network model [15]; for 50 authors, the 10-fold CV error with 2000 selected variables
is 19.51%, which is similar to the gKDR-FEX result with only 50 linear features.
3 Variable selection with gKDR
In recent years, extensive studies have been done on variable selection with a sparseness penalty
([6, 16, 18, 23-27, 29, 30] among many others). In the supervised setting, these studies often consider
some specific model for the regression such as least squares or logistic regression. While consistency
and oracle properties have also been established for many methods, the assumption that there is a true
parameter in the model may not hold in practice, which is thus a strong restriction of the methods. It
is then important to consider more flexible ways of variable selection without assuming any parametric model on the regression. The gKDR approach is appealing to this problem, since it realizes
conditional independence without strong assumptions for regression or distribution of variables.
Chen et al. [2] recently proposed the Coordinate-Independent Sparse (CIS) method, which is a semiparametric method for sparse variable selection. In CIS, the linear feature B^T X is assumed with
some rows of B zero, but no parametric model is specified for regression. We wish to estimate B so
that the zero-rows should be estimated as zeros. This is achieved by imposing the sparseness penalty
of the group LASSO [29] in combination with an objective function of linear feature extraction
written in the form of eigenproblem such as SIR and PFC [3].
We follow the CIS method for our variable selection with gKDR; since the gKDR is given by the
eigenproblem with matrix \tilde{M}_n, the CIS method is applied straightforwardly. The significance of our
method is that the gKDR formulates the conditional independence of Y and X given B^T X, while
the existing CIS-based methods in [2] realize only weaker conditions under strong assumptions.
3.1 Sparse variable selection with gKDR
Throughout this section, it is assumed that the true probability satisfies Eq. (1) with B = B_0 =
(v_{01}^T, \ldots, v_{0m}^T)^T, and with some 1 \leq q \leq m the j-th row v_{0j} is non-zero for j \leq q and v_{0j} = 0
for j \geq q + 1. The projection matrix is B = (b_1, \ldots, b_d) = (v_1^T, \ldots, v_m^T)^T, where b_i is the i-th
column and v_j is the j-th row. The proposed variable selection method, gKDR-VS, estimates B by
\hat{B}_\lambda = \arg\min_{B : B^T B = I_d} \Big[ -\mathrm{Tr}[B^T \tilde{M}_n B] + \sum_{i=1}^{m} \lambda_i \|v_i\| \Big],    (9)
where \|v_j\| is the Euclidean norm and \lambda = (\lambda_1, \ldots, \lambda_m) \in R_+^m is the vector of regularization coefficients.
To optimize Eq. (9), as in [2], we used the local quadratic approximation [6], which is simple and
fast. We used the MATLAB code provided at the homepage of X. Chen.
The choice of \lambda is crucial to the practical performance of sparse variable selection. As a theoretical
guarantee, we will show that some asymptotic condition provides model consistency. In practice, as
in the Adaptive Lasso [30], it is suitable to consider \lambda = \lambda(\eta) defined by
\lambda_i = \eta \|\tilde{v}_i\|^{-r},
where \eta and r are positive numbers, and \tilde{v}_i is the row vector of \tilde{B}_0, the solution to gKDR without
penalty, i.e., \tilde{B}_0 = \arg\min_{B^T B = I_d} -\mathrm{Tr}[B^T \tilde{M}_n B]. We used r = 1/2 for all of our experiments.
To choose the parameter \eta, a BIC-based method is often used in sparse variable selection [27, 31]
with a theoretical guarantee of model consistency. We use a BIC-type method for choosing \eta by
minimizing
\mathrm{BIC}_\eta = -\mathrm{Tr}[\hat{B}_{\lambda(\eta)}^T \tilde{M}_n \hat{B}_{\lambda(\eta)}] + C_n\, df_\eta\, \frac{\log n}{n},    (10)
where df_\eta = d(p - d) is the degree of freedom of \hat{B}_{\lambda(\eta)} with p the number of non-zero rows in
\hat{B}_{\lambda(\eta)}, and C_n is a positive number of O_p(1). We used C_n = \mu_1 \log\log(m) with \mu_1 the largest
eigenvalue of \tilde{M}_n. The \log\log(m) factor is used in [27], where an increasing number of variables is
discussed, and \mu_1 is introduced to adjust the scale of \mathrm{Tr}[\hat{B}_\eta^T \tilde{M}_n \hat{B}_\eta]; we use CV for choosing the
hyperparameters (kernel and regularization coefficient), in which the values of \mathrm{Tr}[\hat{B}_\eta^T \tilde{M}_n \hat{B}_\eta] are not
normalized well for different choices.
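The local quadratic approximation mentioned above reduces each step to an eigenproblem: replacing \|v_i\| by its quadratic majorizer turns the penalty into \mathrm{Tr}[B^T D B] with a diagonal D, so each iteration is again a top-d eigendecomposition. A hypothetical sketch of this loop is given below; it is not the MATLAB code referenced in the text, and the thresholds and iteration cap are illustrative. After the final hard-thresholding, B is no longer exactly orthonormal, which is acceptable for a sketch.

def gkdr_vs(M, d, lam, n_iter=100, tol=1e-8, zero_thresh=1e-6):
    # M: (m, m) symmetric matrix (tilde M_n); lam: (m,) per-row penalties lambda_i.
    m = M.shape[0]
    w, V = np.linalg.eigh(M)
    B = V[:, ::-1][:, :d]                       # unpenalized start (tilde B_0)
    for _ in range(n_iter):
        row_norms = np.maximum(np.linalg.norm(B, axis=1), zero_thresh)
        D = np.diag(lam / (2 * row_norms))      # quadratic surrogate of the group penalty
        w, V = np.linalg.eigh(M - D)            # maximize Tr[B^T (M - D) B] over B^T B = I_d
        B_new = V[:, ::-1][:, :d]
        converged = np.linalg.norm(B_new @ B_new.T - B @ B.T) < tol
        B = B_new
        if converged:
            break
    B[np.linalg.norm(B, axis=1) < 10 * zero_thresh] = 0.0   # zero out unselected variables
    return B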
              gKDR-VS       CIS-SIR
(A) n = 60    .94/.99/75    .89/1.0/65
(A) n = 120   1.0/1.0/98    .99/1.0/97
(B) n = 100   .92/.84/63    .19/.85/1
(B) n = 200   .98/.89/75    .18/.85/1
Table 3: gKDR-VS and CIS-SIR with synthetic data (ratio of nonzeros in 1 \leq j \leq q / ratio of zeros in q + 1 \leq j \leq m / number of correct models among 100 runs).
Variable  | gKDR-VS          | CIS-SIR          | CIS-PFC
CRIM      |  0        0      |  0        0      |  0        0
ZN        |  0        0      | -0.000   -0.008  |  0        0
INDUS     |  0        0      |  0        0      |  0        0
CHAS      |  0        0      |  0        0      |  0        0
NOX       |  0        0      |  0        0      |  0        0
RM        |  0.896    0.393  | -1.00    -1.253  |  1.045   -1.390
AGE       |  0        0      |  0.005   -0.022  | -0.003   -0.011
DIS       | -0.169    0.022  |  0        0      |  0        0
RAD       |  0.018   -0.000  |  0        0      |  0        0
TAX       |  0        0      |  0.001   -0.001  | -0.001   -0.005
PTRATIO   | -0.376    0.919  |  0.049    0.003  | -0.038    0.007
B         |  0        0      | -0.001    0.002  |  0.001    0.005
LSTAT     | -0.165    0.017  |  0.043   -0.114  | -0.043   -0.113
Table 4: Boston Housing Data: estimated sparse EDR (two columns of B per method, d = 2).
3.2 Theoretical results on gKDR-VS
This subsection shows the model consistency of the gKDR-VS. All the proofs are shown in Supplements. Let a_n = \max\{\lambda_j \mid 1 \leq j \leq q\} and b_n = \min\{\lambda_j \mid q + 1 \leq j \leq m\}. The eigenvalues of
M = E[M(X)] are \rho_1 \geq \ldots \geq \rho_m \geq 0. For two m \times d matrices B_i (i = 1, 2) with B_i^T B_i = I_d,
we define D(B_1, B_2) = \|B_1 B_1^T - B_2 B_2^T\|, where \|\cdot\| is the operator norm.
Theorem 3. Suppose \|\tilde{M}_n - M\|_F = O_p(n^{-\gamma}) for some \gamma > 0. If n^{\gamma} a_n \to 0 as n \to \infty and
\rho_q > \rho_{q+1}, then the estimator \hat{B}_\lambda in Eq. (9) satisfies D(\hat{B}_\lambda, B_0) = O_p(n^{-\gamma}) as n \to \infty.
We saw in Theorem 2 that under some conditions \tilde{M}_n converges to M at the rate O_p(n^{-\gamma}) with
1/4 \leq \gamma \leq 1/3. Thus Theorem 3 shows that \hat{B}_\lambda is also consistent at the same rate.
Theorem 4. In addition to the assumptions in Theorem 3, assume n^{\gamma} b_n \to \infty as n \to \infty. Then,
for all q + 1 \leq j \leq m, \Pr(\hat{v}_j = 0) \to 1 as n \to \infty, where \hat{v}_j is the j-th row of \hat{B}_\lambda.
3.3 Experiments with gKDR-VS
We first apply the gKDR-VS with d = 1 to synthetic data generated by the following two models:
(A): Y = X^1 + X^2 + X^3 + W and (B): Y = (X^1 + X^2 + X^3)^4 W, where the noise W follows
N(0, 1). For (A), X = (X^1, \ldots, X^{24}) is generated by N(0, \Sigma) with \Sigma_{ij} = (1/2)^{|i-j|} (1 \leq i, j \leq
24), and for (B) X = (X^1, \ldots, X^{10}) by N(0, 4 I_{10}). Note that (B) includes multiplicative noise,
which cannot be handled by many dimension reduction methods. In comparison, the CIS method
with SIR is also applied to the same data. The regularization parameter of CIS-SIR is chosen by
the BIC described in [2]. While both the methods work effectively for (A), only gKDR-VS can handle
the multiplicative noise of (B).
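For illustration, a hypothetical end-to-end run on model (B), reusing gkdr_matrix and gkdr_vs from the sketches above; the value \eta = 0.1, the median-distance kernel width, and the other parameters are arbitrary choices, not the paper's settings.

rng = np.random.default_rng(1)
n = 200
X = rng.normal(0.0, 2.0, size=(n, 10))                       # X ~ N(0, 4 I_10)
Y = (X[:, 0] + X[:, 1] + X[:, 2])**4 * rng.normal(size=n)    # multiplicative noise
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
sigma = np.median(D[np.triu_indices(n, 1)])                  # median heuristic
M = gkdr_matrix(X, Y, sigma, np.std(Y), eps=1e-5)
v0 = np.linalg.eigh(M)[1][:, -1:]                            # unpenalized solution, d = 1
lam = 0.1 / np.sqrt(np.maximum(np.linalg.norm(v0, axis=1), 1e-8))  # adaptive weights, r = 1/2
B = gkdr_vs(M, d=1, lam=lam)
print(np.nonzero(np.linalg.norm(B, axis=1) > 0)[0])          # ideally selects [0 1 2]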
The next experiment uses Boston Housing data, which has been often used for variable selection.
The response Y is the median value of homes in each tract, and thirteen variables are used to explain
it. The details of the variables are described in Supplements, Sec. E. The results of gKDR-VS and CIS-SIR / CIS-PFC with d = 2 are shown in Table 4. The variables selected by gKDR-VS are RM, DIS,
RAD, PTRATIO and LSTAT, which are slightly different from the CIS methods. In a previous study
[1], the four variables RM, TAX, PTRATIO and LSTAT are considered to have major contribution.
4 Conclusions
We have proposed a gradient-based kernel approach for dimension reduction in supervised learning. The method is based on the general kernel formulation of conditional independence, and thus
has wide applicability without strong restrictions on the model or variables. The linear feature
extraction, gKDR-FEX, finds effective features with simple eigendecomposition, even when other
conventional methods are not applicable by multiplicative noise or high-dimensionality. The consistency is also guaranteed. We have extended the method to variable selection (gKDR-VS) with a
sparseness penalty, and demonstrated its promising performance with synthetic and real world data.
The model consistency has been also proved.
Acknowledgements. KF has been supported in part by JSPS KAKENHI (B) 22300098.
References
[1] L. Breiman and J. Friedman. Estimating optimal transformations for multiple regression and correlation. J. Amer. Stat. Assoc., 80:580-598, 1985.
[2] X. Chen, C. Zou, and R. Dennis Cook. Coordinate-independent sparse sufficient dimension reduction and variable selection. Ann. Stat., 38(6):3696-3723, 2010.
[3] R. Dennis Cook and L. Forzani. Principal fitted components for dimension reduction in regression. Statistical Science, 23(4):485-501, 2008.
[4] R. Dennis Cook and S. Weisberg. Discussion of Li (1991). J. Amer. Stat. Assoc., 86:328-332, 1991.
[5] J. Fan and I. Gijbels. Local Polynomial Modelling and its Applications. Chapman and Hall, 1996.
[6] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Stat. Assoc., 96(456):1348-1360, 2001.
[7] A. Frank and A. Asuncion. UCI machine learning repository, [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. 2010.
[8] K. Fukumizu, F.R. Bach, and M.I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. JMLR, 5:73-99, 2004.
[9] K. Fukumizu, F.R. Bach, and M.I. Jordan. Kernel dimension reduction in regression. Ann. Stat., 37(4):1871-1905, 2009.
[10] A. Gretton, K. Fukumizu, C.H. Teo, L. Song, B. Scholkopf, and Alex Smola. A kernel statistical test of independence. In Advances in NIPS 20, pages 585-592. 2008.
[11] M. Hristache, A. Juditsky, J. Polzehl, and V. Spokoiny. Structure adaptive approach for dimension reduction. Ann. Stat., 29(6):1537-1566, 2001.
[12] B. Li, H. Zha, and F. Chiaromonte. Contour regression: A general approach to dimension reduction. Ann. Stat., 33(4):1580-1616, 2005.
[13] K.-C. Li. Sliced inverse regression for dimension reduction (with discussion). J. Amer. Stat. Assoc., 86:316-342, 1991.
[14] K.-C. Li. On principal Hessian directions for data visualization and dimension reduction: Another application of Stein's lemma. J. Amer. Stat. Assoc., 87:1025-1039, 1992.
[15] S. Liu, Z. Liu, J. Sun, and L. Liu. Application of synergetic neural network in online writeprint identification. Intern. J. Digital Content Technology and its Applications, 5(3):126-135, 2011.
[16] L. Meier, S. Van De Geer, and P. Buhlmann. The group lasso for logistic regression. J. Royal Stat. Soc.: Ser. B, 70(1):53-71, 2008.
[17] A.M. Samarov. Exploring regression structure using nonparametric functional estimation. J. Amer. Stat. Assoc., 88(423):836-847, 1993.
[18] S. K. Shevade and S. S. Keerthi. A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics, 19(17):2246-2253, 2003.
[19] L. Song, J. Huang, A. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In Proc. ICML 2009, pages 961-968. 2009.
[20] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Scholkopf, and G.R.G. Lanckriet. Hilbert space embeddings and metrics on probability measures. JMLR, 11:1517-1561, 2010.
[21] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[22] G.W. Stewart and J.-Q. Sun. Matrix Perturbation Theory. Academic Press, 1990.
[23] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Stat. Soc.: Ser. B, 58(1):267-288, 1996.
[24] H. Wang and C. Leng. Unified lasso estimation by least squares approximation. J. Amer. Stat. Assoc., 102(479):1039-1048, 2007.
[25] H. Wang, G. Li, and C.-L. Tsai. Regression coefficient and autoregressive order shrinkage and selection via the lasso. J. Royal Stat. Soc.: Ser. B, 69(1):63-78, 2007.
[26] H. Wang, G. Li, and C.-L. Tsai. On the consistency of SCAD tuning parameter selector. Biometrika, 94:553-558, 2007.
[27] H. Wang, B. Li, and C. Leng. Shrinkage tuning parameter selection with a diverging number of parameters. J. Royal Stat. Soc.: Ser. B, 71(3):671-683, 2009.
[28] M. Wang, F. Sha, and M. Jordan. Unsupervised kernel dimension reduction. NIPS 23, pages 2379-2387. 2010.
[29] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Royal Stat. Soc.: Ser. B, 68(1):49-67, 2006.
[30] H. Zou. The adaptive lasso and its oracle properties. J. Amer. Stat. Assoc., 101:1418-1429, 2006.
[31] C. Zou and X. Chen. On the consistency of coordinate-independent sparse estimation with BIC. J. Multivariate Analysis, 112:248-255, 2012.
Efficient coding provides a direct link between prior
and likelihood in perceptual Bayesian inference
Xue-Xin Wei and Alan A. Stocker*
Departments of Psychology and
Electrical and Systems Engineering
University of Pennsylvania
Philadelphia, PA-19104, U.S.A.
Abstract
A common challenge for Bayesian models of perception is the fact that the two
fundamental Bayesian components, the prior distribution and the likelihood function, are formally unconstrained. Here we argue that a neural system that emulates
Bayesian inference is naturally constrained by the way it represents sensory information in populations of neurons. More specifically, we show that an efficient
coding principle creates a direct link between prior and likelihood based on the
underlying stimulus distribution. The resulting Bayesian estimates can show biases away from the peaks of the prior distribution, a behavior seemingly at odds
with the traditional view of Bayesian estimation, yet one that has been reported
in human perception. We demonstrate that our framework correctly accounts for
the repulsive biases previously reported for the perception of visual orientation,
and show that the predicted tuning characteristics of the model neurons match
the reported orientation tuning properties of neurons in primary visual cortex.
Our results suggest that efficient coding is a promising hypothesis in constraining Bayesian models of perceptual inference.
1 Motivation
Human perception is not perfect. Biases have been observed in a large number of perceptual tasks
and modalities, of which the most salient ones constitute many well-known perceptual illusions. It
has been suggested, however, that these biases do not reflect a failure of perception but rather an observer?s attempt to optimally combine the inherently noisy and ambiguous sensory information with
appropriate prior knowledge about the world [13, 4, 14]. This hypothesis, which we will refer to as
the Bayesian hypothesis, has indeed proven quite successful in providing a normative explanation of
perception at a qualitative and, more recently, quantitative level (see e.g. [15]). A major challenge in
forming models based on the Bayesian hypothesis is the correct selection of two main components:
the prior distribution (belief) and the likelihood function. This has encouraged some to criticize the
Bayesian hypothesis altogether, claiming that arbitrary choices for these components always allow
for unjustified post-hoc explanations of the data [1].
We do not share this criticism, referring to a number of successful attempts to constrain prior beliefs
and likelihood functions based on principled grounds. For example, prior beliefs have been defined
as the relative distribution of the sensory variable in the environment in cases where these statistics
are relatively easy to measure (e.g. local visual orientations [16]), or where it can be assumed that
subjects have learned them over the course of the experiment (e.g. time perception [17]). Other
studies have constrained the likelihood function according to known noise characteristics of neurons
that are crucially involved in the specific perceptual process (e.g. motion tuned neurons in visual cor-
*http://www.sas.upenn.edu/~astocker/lab
[Figure 1: schematic of the encoding-decoding cascade: world -> efficient encoding -> neural representation -> Bayesian decoding -> percept.]
Figure 1: Encoding-decoding framework. A stimulus representing a sensory variable \theta elicits a firing
rate response R = \{r_1, r_2, \ldots, r_N\} in a population of N neurons. The perceptual task is to generate a
good estimate \hat{\theta}(R) of the presented value of the sensory variable based on this population response.
Our framework assumes that encoding is efficient, and decoding is Bayesian based on the likelihood
p(R|\theta), the prior p(\theta), and a squared-error loss function.
tex [18]). However, we agree that finding appropriate constraints is generally difficult and that prior
beliefs and likelihood functions have been often selected on the basis of mathematical convenience.
Here, we propose that the efficient coding hypothesis [19] offers a joint constraint on the prior and
likelihood function in neural implementations of Bayesian inference. Efficient coding provides a
normative description of how neurons encode sensory information, and suggests a direct link between measured perceptual discriminability, neural tuning characteristics, and environmental statistics [11]. We show how this link can be extended to a full Bayesian account of perception that
includes perceptual biases. We validate our model framework against behavioral as well as neural
data characterizing the perception of visual orientation. We demonstrate that we can account not
only for the reported perceptual biases away from the cardinal orientations, but also for the specific response characteristics of orientation-tuned neurons in primary visual cortex. Our work is a
novel proposal of how two important normative hypotheses in perception science, namely efficient
(en)coding and Bayesian decoding, might be linked.
2 Encoding-decoding framework
We consider perception as an inference process that takes place along the simplified neural encoding-decoding cascade illustrated in Fig. 1.^1
2.1 Efficient encoding
Efficient encoding proposes that the tuning characteristics of a neural population are adapted to
the prior distribution p(\theta) of the sensory variable such that the population optimally represents the
sensory variable [19]. Different definitions of "optimally" are possible, and may lead to different
results. Here, we assume an efficient representation that maximizes the mutual information between
the sensory variable and the population response. With this definition and an upper limit on the total
firing activity, the square-root of the Fisher Information must be proportional to the prior distribution [12, 21].
In order to constrain the tuning curves of individual neurons in the population we also impose a
homogeneity constraint, requiring that there exists a one-to-one mapping F(\theta) that transforms the
physical space with units \theta to a homogeneous space with units \tilde{\theta} = F(\theta) in which the stimulus
distribution becomes uniform. This defines the mapping as
F(\theta) = \int_{-\infty}^{\theta} p(\chi)\, d\chi,    (1)
which is the cumulative of the prior distribution p(\theta). We then assume a neural population with identical tuning curves that evenly tiles the stimulus range in this homogeneous space. The population
provides an efficient representation of the sensory variable \theta according to the above constraints [11].
The tuning curves in the physical space are obtained by applying the inverse mapping F^{-1}(\tilde{\theta}). Fig. 2
^1 In the context of this paper, we consider "inferring", "decoding", and "estimating" as synonymous.
[Figure 2: four panels (a-d) showing the stimulus distribution, the efficient encoding via the mappings F and F^{-1} between physical and homogeneous space, the resulting symmetric vs. asymmetric likelihood functions, and the uniform Fisher information, discriminability, and average firing rates (Hz) in the homogeneous space.]
Figure 2: Efficient encoding constrains the likelihood function. a) Prior distribution p(\theta) derived
from stimulus statistics. b) Efficient coding defines the shape of the tuning curves in the physical
space by transforming a set of homogeneous neurons using a mapping F^{-1} that is the inverse of
the cumulative of the prior p(\theta) (see Eq. (1)). c) As a result, the likelihood shape is constrained by
the prior distribution showing heavier tails on the side of lower prior density. d) Fisher information,
discrimination threshold, and average firing rates are all uniform in the homogeneous space.
illustrates the applied efficient encoding scheme, the mapping, and the concept of the homogeneous
space for the example of a symmetric, exponentially decaying prior distribution p(?). The key idea
here is that by assuming efficient encoding, the prior (i.e. the stimulus distribution in the world)
directly constrains the likelihood function. In particular, the shape of the likelihood is determined
by the cumulative distribution of the prior. As a result, the likelihood is generally asymmetric, as
shown in Fig. 2, exhibiting heavier tails on the side of the prior with lower density.
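As a rough illustration of this construction, the following sketch builds tuning curves from a prior via Eq. (1). The Gaussian curve shape and its width in the homogeneous space are illustrative assumptions, not taken from the paper.

import numpy as np

def efficient_tuning_curves(prior_pdf, theta, n_neurons=20, width=0.05):
    # prior_pdf: prior density evaluated on the grid `theta` (assumed normalized).
    # Returns (n_neurons, len(theta)): identical curves tiled evenly in the
    # homogeneous space theta_tilde = F(theta), mapped back through F^{-1}.
    d_theta = theta[1] - theta[0]
    F = np.cumsum(prior_pdf) * d_theta              # cumulative of the prior, Eq. (1)
    centers = np.linspace(0.05, 0.95, n_neurons)    # even tiling of the homogeneous space
    # A curve centered at c in homogeneous coordinates, evaluated at F(theta):
    return np.array([np.exp(-(F - c)**2 / (2 * width**2)) for c in centers])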
2.2 Bayesian decoding
Let us consider a population of N sensory neurons that efficiently represents a stimulus variable \theta
as described above. A stimulus \theta_0 elicits a specific population response that is characterized by the
vector R = [r_1, r_2, \ldots, r_N] where r_i is the spike-count of the i-th neuron over a given time-window
\tau. Under the assumption that the variability in the individual firing rates is governed by a Poisson
process, we can write the likelihood function over \theta as
p(R|\theta) = \prod_{i=1}^{N} \frac{(\tau f_i(\theta))^{r_i}}{r_i!}\, e^{-\tau f_i(\theta)},    (2)
with f_i(\theta) describing the tuning curve of neuron i. We then define a Bayesian decoder \hat{\theta}_{LSE} as
the estimator that minimizes the expected squared-error between the estimate and the true stimulus
value, thus
\hat{\theta}_{LSE}(R) = \frac{\int \theta\, p(R|\theta)\, p(\theta)\, d\theta}{\int p(R|\theta)\, p(\theta)\, d\theta},    (3)
where we use Bayes' rule to appropriately combine the sensory evidence with the stimulus prior
p(\theta).
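A minimal numerical sketch of one encoding-decoding trial (Eqs. (2)-(3)) on a discrete stimulus grid is given below; the peak firing rate and the time window are illustrative assumptions.

def simulate_percept(theta0, prior_pdf, theta, curves, tau=0.5,
                     rng=np.random.default_rng(0)):
    # theta0: presented stimulus; curves: (N, len(theta)) tuning curves
    # (e.g. from efficient_tuning_curves above). Returns the posterior mean.
    i0 = np.argmin(np.abs(theta - theta0))
    rates = 50.0 * curves[:, i0]                    # hypothetical peak rate of 50 Hz
    R = rng.poisson(tau * rates)                    # Poisson population response, Eq. (2)
    # Log-likelihood over the grid, then Bayes' rule with the prior (Eq. (3)):
    log_like = R @ np.log(tau * 50.0 * curves + 1e-12) - tau * 50.0 * curves.sum(axis=0)
    d_theta = theta[1] - theta[0]
    post = np.exp(log_like - log_like.max()) * prior_pdf
    post /= post.sum() * d_theta
    return (theta * post).sum() * d_theta           # posterior mean (squared-error loss)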
3 Bayesian estimates can be biased away from prior peaks
Bayesian models of perception typically predict perceptual biases toward the peaks of the prior density, a characteristic often considered a hallmark of Bayesian inference. This originates from the
[Figure 3: three panels (a-c) contrasting prior attraction with likelihood repulsion, showing the prior, the likelihood, the likelihood mean, and the posterior mean, and the resulting repulsive bias.]
Figure 3: Bayesian estimates biased away from the prior. a) If the likelihood function is symmetric,
then the estimate (posterior mean) is, on average, shifted away from the actual value of the sensory
variable \theta_0 towards the prior peak. b) Efficient encoding typically leads to an asymmetric likelihood
function whose normalized mean is away from the peak of the prior (relative to \theta_0). The estimate
is determined by a combination of prior attraction and shifted likelihood mean, and can exhibit an
overall repulsive bias. c) If p(\theta_0)' < 0 and the likelihood is relatively narrow, then (1/p(\theta)^2)' > 0
(blue line) and the estimate is biased away from the prior peak (see Eq. (6)).
common approach of choosing a parametric description of the likelihood function that is computationally convenient (e.g. Gaussian). As a consequence, likelihood functions are typically assumed to
be symmetric (but see [23, 24]), leaving the bias of the Bayesian estimator to be mainly determined
by the shape of the prior density, i.e. leading to biases toward the peak of the prior (Fig. 3a).
In our model framework, the shape of the likelihood function is constrained by the stimulus prior
via efficient neural encoding, and is generally not symmetric for non-flat priors. It has a heavier tail
on the side with lower prior density (Fig. 3b). The intuition is that due to the efficient allocation
of neural resources, the side with smaller prior density will be encoded less accurately, leading to a
broader likelihood function on that side. The likelihood asymmetry pulls the Bayes' least-squares
estimate away from the peak of the prior while at the same time the prior pulls it toward its peak.
Thus, the resulting estimation bias is the combination of these two counter-acting forces - and both
are determined by the prior!
3.1 General derivation of the estimation bias
In the following, we will formally derive the mean estimation bias b(?) of the proposed encodingdecoding framework. Specifically, we will study the conditions for which the bias is repulsive i.e.
away from the peak of the prior density.
We first re-write the estimator \hat{\theta}_{LSE} (3) by replacing \theta with the inverse of its mapping to the homogeneous space, i.e., \theta = F^{-1}(\tilde{\theta}). The motivation for this is that the likelihood in the homogeneous space is symmetric (Fig. 2). Given a value \theta_0 and the elicited population response R, we can write the estimator as
\hat{\theta}_{LSE}(R) = \frac{\int \theta\, p(R|\theta)\, p(\theta)\, d\theta}{\int p(R|\theta)\, p(\theta)\, d\theta} = \frac{\int F^{-1}(\tilde{\theta})\, p(R|F^{-1}(\tilde{\theta}))\, p(F^{-1}(\tilde{\theta}))\, dF^{-1}(\tilde{\theta})}{\int p(R|F^{-1}(\tilde{\theta}))\, p(F^{-1}(\tilde{\theta}))\, dF^{-1}(\tilde{\theta})}.
Calculating the derivative of the inverse function and noting that F is the cumulative of the prior density, we get
dF^{-1}(\tilde{\theta}) = (F^{-1}(\tilde{\theta}))'\, d\tilde{\theta} = \frac{1}{F'(\theta)}\, d\tilde{\theta} = \frac{1}{p(\theta)}\, d\tilde{\theta} = \frac{1}{p(F^{-1}(\tilde{\theta}))}\, d\tilde{\theta}.
Hence, we can simplify \hat{\theta}_{LSE}(R) as
\hat{\theta}_{LSE}(R) = \frac{\int F^{-1}(\tilde{\theta})\, p(R|F^{-1}(\tilde{\theta}))\, d\tilde{\theta}}{\int p(R|F^{-1}(\tilde{\theta}))\, d\tilde{\theta}}.
With
K(R, \tilde{\theta}) = \frac{p(R|F^{-1}(\tilde{\theta}))}{\int p(R|F^{-1}(\tilde{\theta}))\, d\tilde{\theta}}
we can further simplify the notation and get
\hat{\theta}_{LSE}(R) = \int F^{-1}(\tilde{\theta})\, K(R, \tilde{\theta})\, d\tilde{\theta}.   (4)
In order to get the expected value of the estimate, \langle\hat{\theta}_{LSE}\rangle(\tilde{\theta}), we marginalize (4) over the population response space S,
\langle\hat{\theta}_{LSE}\rangle(\tilde{\theta}) = \int_S \int p(R)\, F^{-1}(\tilde{\theta})\, K(R, \tilde{\theta})\, d\tilde{\theta}\, dR = \int F^{-1}(\tilde{\theta}) \Big( \int_S p(R)\, K(R, \tilde{\theta})\, dR \Big)\, d\tilde{\theta} = \int F^{-1}(\tilde{\theta})\, L(\tilde{\theta})\, d\tilde{\theta},
where we define
L(\tilde{\theta}) = \int_S p(R)\, K(R, \tilde{\theta})\, dR.
It follows that \int L(\tilde{\theta})\, d\tilde{\theta} = 1. Due to the symmetry in this space, it can be shown that L(\tilde{\theta}) is symmetric around the true stimulus value \tilde{\theta}_0. Intuitively, L(\tilde{\theta}) can be thought of as the normalized average likelihood in the homogeneous space. We can then compute the expected bias at \theta_0 as
b(\theta_0) = \int F^{-1}(\tilde{\theta})\, L(\tilde{\theta})\, d\tilde{\theta} - F^{-1}(\tilde{\theta}_0).   (5)
This expression is general, where F^{-1}(\tilde{\theta}) is defined as the inverse of the cumulative of an arbitrary prior density p(\theta) (see Eq. (1)) and the dispersion of L(\tilde{\theta}) is determined by the internal noise level.
Assuming the prior density to be smooth, we expand F^{-1} in a neighborhood (\tilde{\theta}_0 - h, \tilde{\theta}_0 + h) that is larger than the support of the likelihood function. Using Taylor's theorem with mean-value forms of the remainder, we get
F^{-1}(\tilde{\theta}) = F^{-1}(\tilde{\theta}_0) + F^{-1}(\tilde{\theta}_0)'\, (\tilde{\theta} - \tilde{\theta}_0) + \tfrac{1}{2} F^{-1}(\tilde{\theta}_x)''\, (\tilde{\theta} - \tilde{\theta}_0)^2,
with \tilde{\theta}_x lying between \tilde{\theta}_0 and \tilde{\theta}. By applying this expression to (5), we find
b(\theta_0) = \int_{\tilde{\theta}_0-h}^{\tilde{\theta}_0+h} \tfrac{1}{2} F^{-1}(\tilde{\theta}_x)''\, (\tilde{\theta} - \tilde{\theta}_0)^2\, L(\tilde{\theta})\, d\tilde{\theta} = \tfrac{1}{2} \int_{\tilde{\theta}_0-h}^{\tilde{\theta}_0+h} \Big( \frac{1}{p(F^{-1}(\tilde{\theta}_x))} \Big)'_{\tilde{\theta}}\, (\tilde{\theta} - \tilde{\theta}_0)^2\, L(\tilde{\theta})\, d\tilde{\theta}
= \tfrac{1}{2} \int_{\tilde{\theta}_0-h}^{\tilde{\theta}_0+h} \Big( -\frac{p(\theta_x)'_{\theta}}{p(\theta_x)^3} \Big)\, (\tilde{\theta} - \tilde{\theta}_0)^2\, L(\tilde{\theta})\, d\tilde{\theta} = \tfrac{1}{4} \int_{\tilde{\theta}_0-h}^{\tilde{\theta}_0+h} \Big( \frac{1}{p(\theta_x)^2} \Big)'_{\theta}\, (\tilde{\theta} - \tilde{\theta}_0)^2\, L(\tilde{\theta})\, d\tilde{\theta}.
In general, there is no simple rule to judge the sign of b(\theta_0). However, if the prior is monotonic on the interval F^{-1}((\tilde{\theta}_0 - h, \tilde{\theta}_0 + h)), then the sign of (1/p(\theta_x)^2)' is always the same as the sign of (1/p(\theta_0)^2)'. Also, if the likelihood is sufficiently narrow we can approximate (1/p(\theta_x)^2)' by (1/p(\theta_0)^2)', and therefore approximate the bias as
b(\theta_0) \approx C \Big( \frac{1}{p(\theta_0)^2} \Big)',   (6)
where C is a positive constant.
The result is quite surprising because it states that as long as the prior is monotonic over the support
of the likelihood function, the expected estimation bias is always away from the peaks of the prior!
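This prediction is straightforward to verify numerically. The sketch below (a toy check; the prior, tuning parameters, and grid are all illustrative assumptions) builds an efficient population for a monotonically increasing prior, simulates many trials at a fixed \theta_0, and finds an average bias whose sign matches that of (1/p(\theta_0)^2)', i.e., repulsion away from the high-density side:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy check of Eq. (6): monotonically increasing prior p ∝ 1 + θ on [0, π].
theta = np.linspace(0.0, np.pi, 2001)
p = 1.0 + theta
p /= np.trapz(p, theta)
F = np.concatenate(([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(theta))))
F /= F[-1]                                   # cumulative of the prior

# Efficient population: Gaussian tuning in the homogeneous space F(θ).
N, tau = 40, 0.2
centers = (np.arange(N) + 0.5) / N
f = 40 * np.exp(-0.5 * ((F[:, None] - centers[None, :]) / 0.04) ** 2) + 2

theta0 = np.pi / 2
i0 = np.argmin(np.abs(theta - theta0))
bias = []
for _ in range(2000):
    r = rng.poisson(tau * f[i0])
    loglik = (r[None, :] * np.log(tau * f) - tau * f).sum(axis=1)
    post = np.exp(loglik - loglik.max()) * p
    post /= np.trapz(post, theta)
    bias.append(np.trapz(theta * post, theta) - theta0)

print(np.mean(bias))   # negative: repelled from the high-density end,
                       # matching the sign of (1/p(theta0)^2)' < 0 here
```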
3.2 Internal (neural) versus external (stimulus) noise
The above derivation of estimation bias is based on the assumption that all uncertainty about the
sensory variable is caused by neural response variability. This level of internal noise depends on the
response magnitude, and thus can be modulated e.g. by changing stimulus contrast. This contrastcontrolled noise modulation is commonly exploited in perceptual studies (e.g. [18]). Internal noise
will always lead to repulsive biases in our framework if the prior is monotonic. If internal noise is
low, the likelihood is narrow and thus the bias is small. Increasing internal noise leads to increasingly
larger biases up to the point where the likelihood becomes wide enough such that monotonicity of
the prior over the support of the likelihood is potentially violated.
Stimulus noise is another way to modulate the noise level in perception (e.g. random-dot motion
stimuli). Such external noise, however, has a different effect on the shape of the likelihood function
as compared to internal noise. It modifies the likelihood function (2) by convolving it with the noise
kernel. External noise is frequently chosen as additive and symmetric (e.g. zero-mean Gaussian). It
is straightforward to prove that such symmetric external noise does not lead to a change in the mean
of the likelihood, and thus does not alter the repulsive effect induced by its asymmetry. However, by
increasing the overall width of the likelihood, the attractive influence of the prior increases, resulting
in an estimate that is closer to the prior peak than without external noise².
4 Perception of visual orientation
We tested our framework by modelling the perception of visual orientation. Our choice was based
on the fact that i) we have pretty good estimates of the prior distribution of local orientations in natural images, ii) tuning characteristics of orientation-selective neurons in visual cortex are well-studied (monkey/cat), and iii) biases in perceived stimulus orientation have been well characterized.
We start by creating an efficient neural population based on measured prior distributions of local
visual orientation, and then compare the resulting tuning characteristics of the population and the
predicted perceptual biases with reported data in the literature.
4.1 Efficient neural model population for visual orientation
Previous studies measured the statistics of the local orientation in large sets of natural images and
consistently found that the orientation distribution is multimodal, peaking at the two cardinal orientations as shown in Fig. 4a [16, 20]. We assumed that the visual system's prior belief over orientation p(\theta) follows this distribution and approximate it formally as
p(\theta) \propto 2 - |\sin(\theta)| \quad \text{(black line in Fig. 4b)}.   (7)
Based on this prior distribution we defined an efficient neural representation for orientation. We
assumed a population of model neurons (N = 30) with tuning curves that follow a von-Mises
distribution in the homogeneous space on top of a constant spontaneous firing rate (5 Hz). We then applied the inverse transformation F^{-1}(\tilde{\theta}) to all these tuning curves to get the corresponding tuning curves in the physical space (Fig. 4b - red curves), where F(\theta) is the cumulative of the prior (7). The concentration parameter for the von-Mises tuning curves was set to \kappa \approx 1.6 in the homogeneous space in order to match the measured average tuning width (\approx 32 deg) of neurons in area V1 of the macaque [9].
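This construction can be sketched directly: build the cumulative F of prior (7), place von-Mises tuning curves uniformly in the homogeneous space, and warp them back through F^{-1}. The snippet below is an illustrative sketch (grid resolution and firing-rate constants are assumptions):

```python
import numpy as np

# Sketch: efficient model population for orientation (Section 4.1).
# Prior (7): p(theta) ∝ 2 - |sin(theta)| on [-pi/2, pi/2).
theta = np.linspace(-np.pi / 2, np.pi / 2, 1441)
p = 2.0 - np.abs(np.sin(theta))
p /= np.trapz(p, theta)
F = np.concatenate(([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(theta))))
F /= F[-1]                                    # map to homogeneous space [0, 1)

N, kappa, base, peak = 30, 1.6, 5.0, 25.0     # N=30, kappa≈1.6, 5 Hz baseline
centers = (np.arange(N) + 0.5) / N            # uniform in homogeneous space
# von-Mises tuning on the homogeneous circle, warped back via F(theta):
phase = 2 * np.pi * (F[:, None] - centers[None, :])
tuning = base + peak * np.exp(kappa * (np.cos(phase) - 1))  # shape (grid, N)

# Density of preferred orientations follows the prior: more neurons near
# the cardinal orientation (theta = 0 here) than near the obliques.
preferred = theta[np.argmax(tuning, axis=0)]
```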
4.2 Predicted tuning characteristics of neurons in primary visual cortex
The orientation tuning characteristics of our model population well match neurophysiological data
of neurons in primary visual cortex (V1). Efficient encoding predicts that the distribution of neurons?
preferred orientation follows the prior, with more neurons tuned to cardinal than oblique orientations
by a factor of approximately 1.5. A similar ratio has been found for neurons in area V1 of monkey/cat [9, 10]. Also, the tuning widths of the model neurons vary between 25-42 deg depending
on their preferred tuning (see Fig. 4c), matching the measured tuning width ratio of 0.6 between
neurons tuned to the cardinal versus oblique orientations [9].
An important prediction of our model is that most of the tuning curves should be asymmetric. Such
asymmetries have indeed been reported for the orientation tuning of neurons in area V1 [6, 7, 8].
We computed the asymmetry index for our model population as defined in previous studies [6, 7],
and plotted it as a function of the preferred tuning of each neuron (Fig. 4d). The overall asymmetry index in our model population is 1.24 ± 0.11, which approximately matches the measured values for neurons in area V1 of the cat (1.26 ± 0.06) [6]. It also predicts that neurons tuned to the cardinal and
oblique orientations should show less symmetry than those tuned to orientations in between. Finally,
² Note that these predictions are likely to change if the external noise is not symmetric.
Figure 4: Tuning characteristics of model neurons. a) Distribution of local orientations in natural
images, replotted from [16]. b) Prior used in the model (black) and predicted tuning curves according
to efficient coding (red). c) Tuning width as a function of preferred orientation. d) Tuning curves
of cardinal and oblique neurons are more symmetric than those tuned to orientations in between. e) Both narrowly and broadly tuned neurons show less asymmetry than neurons with tuning widths in between.
neurons with tuning widths at the lower and upper end of the range are predicted to exhibit less
asymmetry than those neurons whose widths lie in between these extremes (illustrated in Fig. 4e).
These last two predictions have not been tested yet.
4.3 Predicted perceptual biases
Our model framework also provides specific predictions for the expected perceptual biases. Humans
show systematic biases in perceived orientation of visual stimuli such as e.g. arrays of Gabor patches
(Fig. 5a,d). Two types of biases can be distinguished: First, perceived orientations show an absolute
bias away from the cardinal orientations, thus away from the peaks of the orientation prior [2, 3].
We refer to these biases as absolute because they are typically measured by adjusting a noise-free
reference until it matched the orientation of the test stimulus. Interestingly, these repulsive absolute
biases are larger the smaller the external stimulus noise is (see Fig. 5b). Second, the relative bias
between the perceived overall orientations of a high-noise and a low-noise stimulus is toward the
cardinal orientations as shown in Fig. 5c, and thus toward the peak of the prior distribution [3, 16].
The predicted perceptual biases of our model are shown in Fig. 5e,f. We computed the likelihood function according to (2) and used the prior in (7). External noise was modeled by convolving the stimulus likelihood function with a Gaussian (different widths for different noise levels). The predictions match both the reported absolute biases away from, and the relative biases toward, the cardinal orientations. Note that our model framework correctly accounts for the fact that less external noise leads to larger absolute biases (see also discussion in section 3.2).
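A compact sketch of this simulation (illustrative; it reuses theta, p, and tuning from the population snippet above, and the noise widths are assumptions) adds external noise by convolving the likelihood with a Gaussian kernel before applying the prior:

```python
import numpy as np

def gauss_kernel(theta, sigma):
    k = np.exp(-0.5 * (theta - theta.mean()) ** 2 / sigma ** 2)
    return k / k.sum()

def mean_estimate(theta, p, tuning, tau, i0, noise_sigma, rng, trials=500):
    """Average posterior-mean estimate at theta[i0], with external noise
    modeled as a Gaussian convolution of the likelihood (Section 4.3)."""
    dtheta = theta[1] - theta[0]
    kern = gauss_kernel(theta, noise_sigma)
    est = []
    for _ in range(trials):
        r = rng.poisson(tau * tuning[i0])
        loglik = (r[None, :] * np.log(tau * tuning) - tau * tuning).sum(axis=1)
        lik = np.exp(loglik - loglik.max())
        lik = np.convolve(lik, kern, mode="same")     # external noise
        post = lik * p
        post /= post.sum() * dtheta
        est.append((theta * post).sum() * dtheta)
    return np.mean(est)

# Example (values assumed): absolute bias at a test orientation for
# low vs. high external noise; the relative bias is their difference.
# rng = np.random.default_rng(2)
# b_low  = mean_estimate(theta, p, tuning, 0.2, 900, 0.02, rng) - theta[900]
# b_high = mean_estimate(theta, p, tuning, 0.2, 900, 0.15, rng) - theta[900]
```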
5 Discussion
We have presented a modeling framework for perception that combines efficient (en)coding and
Bayesian decoding. Efficient coding imposes constraints on the tuning characteristics of a population of neurons according to the stimulus distribution (prior). It thus establishes a direct link
between prior and likelihood, and provides clear constraints on the latter for a Bayesian observer
model of perception. We have shown that the resulting likelihoods are in general asymmetric, with
Figure 5: Biases in perceived orientation: Human data vs. Model prediction. a,d) Low- and high-noise orientation stimuli of the type used in [3, 16]. b) Humans show absolute biases in perceived orientation that are away from the cardinal orientations. Data replotted from [2] (pink squares)
and [3] (green (black) triangles: bias for low (high) external noise). c) Relative bias between stimuli
with different external noise level (high minus low). Data replotted from [3] (blue triangles) and [16]
(red circles). e,f) Model predictions for absolute and relative bias.
heavier tails away from the prior peaks. We demonstrated that such asymmetric likelihoods can lead
to the counter-intuitive prediction that a Bayesian estimator is biased away from the peaks of the
prior distribution. Interestingly, such repulsive biases have been reported for human perception of
visual orientation, yet a principled and consistent explanation of their existence has been missing so
far. Here, we suggest that these counter-intuitive biases directly follow from the asymmetries in the
likelihood function induced by efficient neural encoding of the stimulus. The good match between
our model predictions and the measured perceptual biases and orientation tuning characteristics of
neurons in primary visual cortex provides further support of our framework.
Previous work has suggested that there might be a link between stimulus statistics, neuronal tuning characteristics, and perceptual behavior based on efficient coding principles, yet none of these
studies has recognized the importance of the resulting likelihood asymmetries [16, 11]. We have
demonstrated here that such asymmetries can be crucial in explaining perceptual data, even though
the resulting estimates appear "anti-Bayesian" at first sight (see also models of sensory adaptation [23]).
Note, that we do not provide a neural implementation of the Bayesian inference step. However,
we and others have proposed various neural decoding schemes that can approximate Bayes? leastsquares estimation using efficient coding [26, 25, 22]. It is also worth pointing out that our estimator
is set to minimize total squared-error, and that other choices of the loss function (e.g. MAP estimator) could lead to different predictions. Our framework is general and should be directly applicable
to other modalities. In particular, it might provide a new explanation for perceptual biases that are
hard to reconcile with traditional Bayesian approaches [5].
Acknowledgments
We thank M. Jogan and A. Tank for helpful comments on the manuscript. This work was partially
supported by grant ONR N000141110744.
References
[1] M. Jones and B. C. Love. Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34, 169-231, 2011.
[2] D. P. Andrews. Perception of contours in the central fovea. Nature, 205:1218-1220, 1965.
[3] A. Tomassini, M. J. Morgan, and J. A. Solomon. Orientation uncertainty reduces perceived obliquity. Vision Res, 50, 541-547, 2010.
[4] W. S. Geisler, D. Kersten. Illusions, perception and Bayes. Nature Neuroscience, 5(6):508-510, 2002.
[5] M. O. Ernst. Perceptual learning: inverting the size-weight illusion. Current Biology, 19:R23-R25, 2009.
[6] G. H. Henry, B. Dreher, P. O. Bishop. Orientation specificity of cells in cat striate cortex. J Neurophysiol, 37(6):1394-1409, 1974.
[7] D. Rose and C. Blakemore. An analysis of orientation selectivity in the cat's visual cortex. Exp Brain Res., 20(1):1-17, 1974.
[8] N. V. Swindale. Orientation tuning curves: empirical description and estimation of parameters. Biol Cybern., 78(1):45-56, 1998.
[9] R. L. De Valois, E. W. Yund, N. Hepler. The orientation and direction selectivity of cells in macaque visual cortex. Vision Res., 22, 531-544, 1982.
[10] B. Li, M. R. Peterson, R. D. Freeman. The oblique effect: a neural basis in the visual cortex. J. Neurophysiol., 90, 204-217, 2003.
[11] D. Ganguli and E. P. Simoncelli. Implicit encoding of prior probabilities in optimal neural populations. In Adv. Neural Information Processing Systems NIPS 23, vol. 23:658-666, 2011.
[12] M. D. McDonnell, N. G. Stocks. Maximally Informative Stimuli and Tuning Curves for Sigmoidal Rate-Coding Neurons and Populations. Phys Rev Lett., 101(5):058103, 2008.
[13] H. Helmholtz. Treatise on Physiological Optics (transl.). Thoemmes Press, Bristol, U.K., 2000. Original publication 1867.
[14] Y. Weiss, E. Simoncelli, and E. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598-604, June 2002.
[15] D. C. Knill and W. Richards, editors. Perception as Bayesian Inference. Cambridge University Press, 1996.
[16] A. R. Girshick, M. S. Landy, and E. P. Simoncelli. Cardinal rules: visual orientation perception reflects knowledge of environmental statistics. Nat Neurosci, 14(7):926-932, Jul 2011.
[17] M. Jazayeri and M. N. Shadlen. Temporal context calibrates interval timing. Nature Neuroscience, 13(8):914-916, 2010.
[18] A. A. Stocker and E. P. Simoncelli. Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, pages 578-585, April 2006.
[19] H. B. Barlow. Possible principles underlying the transformation of sensory messages. In W. A. Rosenblith, editor, Sensory Communication, pages 217-234. MIT Press, Cambridge, MA, 1961.
[20] D. M. Coppola, H. R. Purves, A. N. McCoy, and D. Purves. The distribution of oriented contours in the real world. Proc Natl Acad Sci U S A., 95(7):4002-4006, 1998.
[21] N. Brunel and J.-P. Nadal. Mutual information, Fisher information and population coding. Neural Computation, 10, 7, 1731-1757, 1998.
[22] X.-X. Wei and A. A. Stocker. Bayesian inference with efficient neural population codes. In Lecture Notes in Computer Science, Artificial Neural Networks and Machine Learning - ICANN 2012, Lausanne, Switzerland, volume 7552, pages 523-530, 2012.
[23] A. A. Stocker and E. P. Simoncelli. Sensory adaptation within a Bayesian framework for perception. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1291-1298. MIT Press, Cambridge, MA, 2006. Oral presentation.
[24] D. C. Knill. Robust cue integration: A Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant. Journal of Vision, 7(7):1-24, 2007.
[25] D. Ganguli. Efficient coding and Bayesian inference with neural populations. PhD thesis, Center for Neural Science, New York University, New York, NY, September 2012.
[26] B. Fischer. Bayesian estimates from heterogeneous population codes. In Proc. IEEE Intl. Joint Conf. on Neural Networks. IEEE, 2010.
| 4489 |
3,854 | 449 | Time-Warping Network:
A Hybrid Framework for Speech Recognition
Roberto Pieraccini
Esther Levin
Enrico Bocchieri
AT&T Bell Laboratories
Speech Research Department
Murray Hill, NJ 07974 USA
ABSTRACT
Recently. much interest has been generated regarding speech
recognition systems based on Hidden Markov Models (HMMs) and
neural network (NN) hybrids. Such systems attempt to combine the
best features of both models: the temporal structure of HMMs and
the discriminative power of neural networks. In this work we define
a time-warping (TW) neuron that extends the operation of the formal neuron of a back-propagation network by warping the input pattern to match it optimally to its weights. We show that a single-layer network of TW neurons is equivalent to a Gaussian density HMM-based recognition system, and we propose to improve the
discriminative power of this system by using back-propagation
discriminative training. and/or by generalizing the structure of the
recognizer to a multi-layered net. The performance of the proposed network was evaluated on a highly confusable, isolated-word, multi-speaker recognition task. The results indicate that not only does the
recognition performance improve. but the separation between classes
is enhanced also, allowing us to set up a rejection criterion to
improve the confidence of the system.
I. INTRODUCTION
Since their first application in speech recognition systems in the late seventies, hidden
Markov models have been established as a most useful tool. mainly due to their ability
to handle the sequential dynamical nature of the speech signal. With the revival of
connectionism in the mid-eighties. considerable interest arose in applying artificial
neural networks for speech recognition. This interest was based on the discriminative
power of NNs and their ability to deal with non-explicit knowledge. These two
paradigms. namely HMM and NN. inspired by different philosophies. were seen at first
as different and competing tools. Recently. links have been established between these
two paradigms. aiming at a hybrid framework in which the advantages of the two
models can be combined. For example. Bourlard and Wellekens [1] showed that neural
networks with proper architecture can be regarded as non-parametric models for
computing "discriminant probabilities" related to HMM. Bridle [2] introduced
"Alpha-nets", a recurrent neural architecture that implements the alpha computation of
HMM, and found connections between back-propagation [3] training and discriminative
HMM parameter estimation. Predictive neural nets were shown to have a statistical
interpretation [4], generalizing the conventional hidden Markov model by assuming
that the speech signal is generated by nonlinear dynamics contaminated by noise.
In this work we establish one more link between the two paradigms by introducing the
time-warping network (1WN) that is a generalization of both an HMM-based
recognizer and a back-propagation net. The basic element of such a network, a time-warping neuron, generalizes the function of a formal neuron by warping the input
signal in order maximize its activation. In the special case of network parameter
values, a single-layered network of time-warping (TW) neurons is equivalent to a
recognizer based on Gaussian HMMs. This equivalence of the HMM-based recognizer
and single-layer TWN suggests ways of using discriminative neural tools to enhance
the performance of the recognizer. For instance, a training algorithm, like back-propagation, that minimizes a quantity related to the recognition performance, can be
used to train the recognizer instead of the standard non-discriminative maximum
likelihood training. Then, the architecture of the recognizer can be expanded to
contain more than one layer of units, enabling the network to form discriminant feature
detectors in the hidden layers.
This paper is organized as follows: in the first part of Section 2 we describe a simple
HMM-based recognizer. Then we define the time-warping neuron and show that a
single-layer network built with such neurons is equivalent to the HMM recognizer. In
Section 3 two methods are proposed to improve the discriminative power of the
recognizer, namely, adopting neural training algorithms and extending the structure of
the recognizer to a multi-layer net. For special cases of such multi-layer architecture
such a net can implement a conventional or weighted [5] HMM recognizer. Results of
experiments using a TW network for recognition of the English E-set are presented in
Section 4. The results indicate that not only does the recognition performance
improve, but the separation between classes is enhanced also, allowing us to set up a
rejection criterion to improve the confidence of the system. A summary and discussion
of this work are included in Section 5.
II. THE MODEL
In this section first we describe the basic HMM-based speech recognition system that
is used in many applications, including isolated and connected word recognition [6]
and large vocabulary subword-based recognition [7]. Though in this paper we treat the
case of isolated word recognition, generalization to connected speech can be made like
in [6,7]. In the second part of this section we define a single-layered time-warping
network and show that it is equivalent to the HMM based recognizer when certain
conditions constraining the network parameter values apply.
II.1 THE HIDDEN MARKOV MODEL-BASED RECOGNITION SYSTEM
A HMM-based recognition system consists of K N-state HMMs, where K is the vocabulary size (number of words or subword units in the defined task). The k-th HMM, \lambda^k, is associated with the k-th word in the vocabulary and is characterized by a matrix A^k = \{a_{ij}^k\} of transition probabilities between states,
a_{ij}^k = \Pr(s_t = j \mid s_{t-1} = i), \quad 0 \le i \le N, \ 1 \le j \le N,   (1)
where s_t denotes the active state at time t (s_0 = 0 is a dummy initial state), and by a set of emission probabilities (one per state):
Time-Warping Network: A Hybrid Framework for Speech Recognition
Pr(X, I s,=i)= ~21t
Illl:~ II 2
exp [- ~ (X,-J1~).
(l:~)-l (X,-J1~)]
, i =1, ... ,N,
(2)
where X, is the d-dimensional observation vector describing some parametric
representation of the t-th frame of the spoken token, and (). denotes the transpose
operation.
For the case discussed here, we concentrate on strictly left-to-right HMMs, where
at 0 only if j =i or j =i + 1, and a simplified case of (2) where all r} =I d, the
d=dimensional unit matrix.
*
The system recognizes a speech token of duration T, X = \{X_1, X_2, \ldots, X_T\}, by classifying the token into the class k_0 with the highest likelihood L^{k_0}(X),
k_0 = \arg\max_{1 \le k \le K} L^k(X).   (3)
The likelihood L^k(X) is computed for the k-th HMM as
L^k(X) = \max_{\{i_1, \ldots, i_T\}} \log\big[ \Pr(X \mid \lambda^k, s_1 = i_1, \ldots, s_T = i_T) \big] = \max_{\{i_1, \ldots, i_T\}} \sum_{t=1}^{T} \Big[ -\tfrac{1}{2}\, \| X_t - \mu_{i_t}^k \|^2 + \log a_{i_{t-1} i_t}^k - \log 2\pi \Big].   (4)
,IT} 1=1
The state sequence that maximizes (4) is found by using the Viterbi [8] algorithm.
II.2 THE EQUIVALENT SINGLE-LAYER TIME-WARPING NETWORK
A single-layer TW network is composed of K TW neurons, one for each word in the vocabulary. The TW neuron is an extension of a formal neuron that can handle dynamic and temporally distorted patterns. The k-th TW neuron, associated with the k-th vocabulary word, is characterized by a bias w_0^k and a set of weights W^k = \{W_1^k, W_2^k, \ldots, W_N^k\}, where W_j^k is a column vector of dimensionality d+2. Given an input speech token of duration T, X = \{X_1, X_2, \ldots, X_T\}, the output activation y^k of the k-th unit is computed as
y^k = g\Big( \sum_{t=1}^{T} \tilde{X}_t^* W_{i_t}^k + w_0^k \Big) = g\Big( \sum_{j=1}^{N} \Big( \sum_{t:\, i_t = j} \tilde{X}_t \Big)^{\!*} W_j^k + w_0^k \Big),   (5)
where g(\cdot) is a sigmoidal, smooth, strictly increasing nonlinearity, and \tilde{X}_t = [X_t^*, 1, 1]^* is a (d+2)-dimensional augmented input vector. The corresponding indices i_t, t = 1, \ldots, T, are determined by the following condition:
\{i_1, \ldots, i_T\} = \arg\max \sum_{t=1}^{T} \tilde{X}_t^* W_{i_t}^k + w_0^k.   (6)
In other words, a TW neuron warps the input pattern to match it optimally to its
weights (6) and computes its output using this warped version of the input (5). The
time-warping process of (6) is a distinguishing feature of this neural model, enabling it
to deal with the dynamic nature of a speech signal and to handle temporal distortions.
All TW neurons in this single-layer net recognizer receive the same input speech token
X. Recognition is performed by selecting the word class corresponding to the neuron with the maximal output activation.
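The alignment in (6) can be computed with the same dynamic programming used for Viterbi decoding. Below is a minimal sketch of a TW neuron (illustrative, not the authors' code): it assumes a strictly left-to-right warping in which each index either stays or advances by one state, consistent with the HMM topology above, and uses tanh as the sigmoidal g(·):

```python
import numpy as np

def tw_neuron(X_aug, W, w0):
    """Time-warping neuron, Eqs. (5)-(6). X_aug: (T, d+2) augmented
    inputs; W: (N, d+2) per-state weights; w0: bias. Returns the
    activation g(.) and the optimal left-to-right warping path."""
    T, N = X_aug.shape[0], W.shape[0]
    score = X_aug @ W.T                       # score[t, j] = X~_t' W_j
    D = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    D[0, 0] = score[0, 0]                     # path must start in state 1
    for t in range(1, T):
        for j in range(N):
            stay = D[t - 1, j]
            move = D[t - 1, j - 1] if j > 0 else -np.inf
            back[t, j] = j if stay >= move else j - 1
            D[t, j] = max(stay, move) + score[t, j]
    path = [N - 1]                            # path must end in state N
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    path.reverse()
    return np.tanh(D[-1, -1] + w0), path      # tanh as the sigmoidal g(.)
```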
It is easy to show that when
W_j^k = \big[ [\mu_j^k]^*, \ -\tfrac{1}{2}\, \| \mu_j^k \|^2, \ \log a_{j,j}^k \big]^*,   (7a)
w_0^k = \sum_{j=1}^{N} \big( \log a_{j-1,j}^k - \log a_{j,j}^k \big),   (7b)
this network is equivalent to an HMM-based recognition system, with K N-state
HMMs, as described above.¹
This equivalent neural representation of an HMM-based system suggests ways of
improving the discriminative power of the recognizer, while preserving the temporal
structure of the HMM, thus allowing generalization to more complicated tasks (e.g.,
continuous speech, subword units, etc.).
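For concreteness, the initialization (7a,b) is easy to write down in code. The sketch below (an illustrative mapping; variable names and the log-transition layout are assumptions) converts the parameters of a strictly left-to-right Gaussian HMM with unit covariances into the weights of the equivalent TW neuron:

```python
import numpy as np

def hmm_to_tw_weights(mu, log_a):
    """Eqs. (7a)-(7b): mu is (N, d) state means, log_a is (N+1, N+1)
    log transition matrix for a strictly left-to-right HMM
    (states 0..N, with state 0 the dummy initial state)."""
    N, d = mu.shape
    W = np.zeros((N, d + 2))
    for j in range(1, N + 1):
        W[j - 1, :d] = mu[j - 1]                           # [mu_j]
        W[j - 1, d] = -0.5 * np.dot(mu[j - 1], mu[j - 1])  # -||mu_j||^2 / 2
        W[j - 1, d + 1] = log_a[j, j]                      # log a_{j,j}
    w0 = sum(log_a[j - 1, j] - log_a[j, j] for j in range(1, N + 1))
    return W, w0
```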
III. IMPROVING DISCRIMINATION
There are two important differences between the HMM-based system and a neural net
approach to speech recognition that contribute to the improved discrimination power of
the latter, namely, training and structure.
III.1 DISCRIMINATIVE TRAINING
The HMM parameters are usually estimated by applying the maximum likelihood
approach, using only the examples of the word represented by the model and
disregarding the rival classes completely. This is a non-discriminative approach: the
learning criterion is not directly connected to the improvement of recognition accuracy.
Here we propose to enhance the discriminative power of the system by adopting a
neural training approach.
NN training algorithms are based on minimizing an error function E, which is related to the performance of the network on the training set of labeled examples, \{X^l, Z^l\}, l = 1, \ldots, L, where Z^l = [z_1^l, \ldots, z_K^l]^* denotes the vector of target neural outputs for the l-th input token. Z^l has +1 only in the entry corresponding to the right word class, and -1 elsewhere. Then,
E = \sum_{l=1}^{L} E^l(Z^l, Y^l),   (8)
where Y^l = [y_1^l, \ldots, y_K^l]^* is a vector of neural output activations for the l-th input token, and E^l(Z^l, Y^l) measures the distortion between the two vectors. One choice for E^l(Z^l, Y^l) is a quadratic error measure, i.e., E^l(Z^l, Y^l) = \| Z^l - Y^l \|^2. Other choices include the cross-entropy error [9] and the recently proposed discriminative error functions, which measure the misclassification rate more directly [10].
The gradient based training algorithms (such as back-propagation) modify the parameters of the network after presentation of each training token to minimize the error (8). The change in the j-th weight subvector of the k-th model after presentation of the l-th training token, \Delta^l W_j^k, is inversely proportional to the derivative of the error E^l with respect to this weight subvector,
\Delta^l W_j^k = -\alpha\, \frac{\partial E^l}{\partial W_j^k} = -\alpha \sum_{m=1}^{K} \frac{\partial E^l}{\partial y_m^l}\, \frac{\partial y_m^l}{\partial W_j^k}, \quad 1 \le j \le N, \ 1 \le k \le K,   (9)
where \alpha > 0 is a step-size, resulting in an updated weight vector W_j^k = \big[ [\mu_j^k + \Delta\mu_j^k]^*, \ -\tfrac{1}{2}\, \| \mu_j^k + \Delta\mu_j^k \|^2, \ \log a_{j,j}^k \big]^*. To compute the terms \partial y_m^l / \partial W_j^k
1. With minor changes we can show equivalence to a general Gaussian HMM, where the covariance
matrices are not restricted to be the unit matrix.
we have to consider (5) and (6) that define the operation of the neuron. Equation (6) expresses the dependence of the warping indices i_1, \ldots, i_T on W_j^k. In the proposed learning rule we compute the gradient for the quadratic error criterion using only (5):
\Delta^l W_j^k = \alpha\, (z_k^l - y_k^l)\, g'(\cdot) \sum_{t:\, i_t = j} \big( \tilde{X}_t - W_j^k \big),   (10)
where the values of i_t fulfill condition (6). Although the weights do not change according to the exact gradient descent rule (since (6) is not taken into account for back-propagation) we found experimentally that the error made by the network always decreases after the weight update. This fact also can be proved when certain conditions restricting the step-size \alpha hold, and we conjecture that it is always true for \alpha > 0.
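A sketch of one such discriminative update, as reconstructed in Eq. (10) above (illustrative; it reuses the tw_neuron routine from the earlier snippet and assumes tanh for g):

```python
import numpy as np

def train_step(X_aug, Z, Ws, w0s, alpha=0.01):
    """One discriminative update per Eq. (10) for all K TW neurons.
    X_aug: (T, d+2) training token; Z: (K,) targets in {-1, +1};
    Ws: list of (N, d+2) weight matrices; w0s: list of biases."""
    for k in range(len(Ws)):
        y, path = tw_neuron(X_aug, Ws[k], w0s[k])  # Eqs. (5)-(6)
        gprime = 1.0 - y ** 2                      # g'(.) for g = tanh
        for j in range(Ws[k].shape[0]):
            frames = [t for t, i_t in enumerate(path) if i_t == j]
            if frames:
                grad = (X_aug[frames] - Ws[k][j]).sum(axis=0)
                Ws[k][j] += alpha * (Z[k] - y) * gprime * grad
```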
III.2 THE STRUCTURE OF THE RECOGNIZER
When the equivalent neural representation of the HMM-based recognizer is used, there
exists a natural way of adaptively increasing the complexity of the decision boundaries and developing discriminative feature detectors. This can be done by extending the structure of the recognizer to a multi-layered net. There are many possible architectures that result from such an extension by changing the number of hidden layers, as well as the number and the type (i.e., standard or TW) of neurons in the hidden layers. Moreover, the role of the TW neurons in the first hidden layer is different now: they are no longer class representatives, as in a single-layered net, but just abstract computing elements with built-in time scale normalization. In this work we investigate only a simple special case of such multi-layered architecture. The multi-layered network we use has a single hidden layer, with N \times K TW neurons. Each
hidden neuron corresponds to one state of one of the original HMMs, and is characterized by a weight vector W_j^k and a bias w_j^k. The output activation h_j^k of the neuron is given as
h_j^k = g\Big( \sum_{t:\, i_t = j} \tilde{X}_t^* W_j^k + w_j^k \Big),   (11)
where the warping indices are determined, as before, by
\{i_1, \ldots, i_T\} = \arg\max \sum_{j=1}^{N} \sum_{t:\, i_t = j} \tilde{X}_t^* W_j^k.
The output layer is composed of K standard neurons. The activation of output neurons y^k, k = 1, \ldots, K, is determined by the hidden layer neuron activations as
y^k = g\big( H^* V^k + v^k \big),   (12)
where V^k is an N \times K dimensional weight vector, H is the vector of hidden neuron activations, and v^k is a bias term.
In a special case of parameter values, when W_j^k satisfy the conditions (7a,b) and
w_j^k = \log a_{j,j-1}^k - \log a_{j,j}^k,   (13)
the activation h_j^k corresponds to an accumulated j-th state likelihood of the k-th HMM, and the network implements a weighted [5] HMM recognizer where the connection
and the network implements a weighted [5] HMM recognizer where the connection
weight vectors Vi detennine the relative weights assigned to each state likelihood in
the final classification. Such network can learn to adopt these weights to enhance
discrimination by giving large positive weights to states that contain infonnation
important for discrimination and ignoring (by fonning zero or close to zero weights)
those states that do not contribute to discrimination. A back-propagation algorithm
can be used for training this net.
IV. EXPERIMENTAL RESULTS
To evaluate the effectiveness of the proposed TWN, we conducted several experiments
that involved recognition of the highly confusable English E-set (i.e., /b, c, d, e, g, p, t, v, z/). The utterances were collected from 100 speakers, 50 males and 50 females,
each speaking every word in the E-set twice, once for training and once for testing.
The signal was sampled at 6.67 kHz. We used 12 cepstral and 12 delta-cepstral LPC-derived [11] coefficients to represent each 45 msec frame of the sampled signal.
We used a baseline conventional HMM-based recognizer to initialize the TW network,
and to get a benchmark performance. Each strictly left-to-right HMM in this system
has five states, and the observation densities are modeled by four Gaussian mixture
components. The recognition rates of this system are 61.7% on the test data, and
80.2% on the training data.
Experiment with single-layer TWN: In this experiment the single-layer TW network
was initialized according to (7), using the parameters of the baseline HMMs. The four
mixture components of each state were treated as a fully connected set of four states,
with transition probabilities that reflect the original transition probabilities and the
relati ve weights of the mixtures. This corresponds to the case in which the local
likelihood is computed using the dominant mixture component only. The network was
trained using the suggested training algorithm (10), with quadratic error function. The
recognition rate of the trained network increased to 69.4% on the test set and 93.6% on
the training set.
Experiment with multi-layer TWN: In this experiment we used the multi-layer
network architecture described in the previous section. The recognition performance of this network after training was 74.4% on the test set and 91% on the training set.
Figures 1, 2, and 3 show the recognition performance of a single-layer TWN initialized by a baseline HMM, the trained single-layer TWN, and the trained multi-layer TWN, respectively. In these figures the activation of the unit representing the
correct class is plotted against the activation of the best wrong unit (Le., the incorrect
class with the highest score) for each input utterance. Therefore, the utterances that
correspond to the marks above the diagonal line are correctly recognized, and those
under it are misclassified. The most interesting observation that can be made from
these plots is the striking difference between the multi-layer and the single-layer
TWNs. The single-layer TWNs in Figures 1 and 2 (the baseline and the trained)
exhibit the same typical behavior when the utterances are concentrated around the
diagonal line. For the multi-layer net, the utterances that were recognized correctly tend
to concentrate in the upper part of the graph, having the correct unit activation close to
1.0. This property of a multi-layer net can be used for introducing error rejection
criterions: utterances for which the difference between the highest activation and
second high activation is less than a prescribed threshold are rejected. In Figure 4 we
compare the test performance of the multi-layer net and the baseline system, both with
such a rejection mechanism, for different values of the rejection threshold. As expected, the multi-layer net outperforms the baseline recognizer, showing a much smaller misclassification rate for the same number of rejections.
V. SUMMARY AND DISCUSSION
In this paper we established a hybrid framework for speech recognition, combining the
characteristics of hidden Markov models and neural networks. We showed that a
HMM-based recognizer has an equivalent representation as a single-layer network
composed of time-warping neurons, and proposed to improve the discriminative power
of the recognizer by using back-propagation training and by generalizing the structure
of the recognizer to a multi-layer net. Several experiments were conducted for testing
the performance of the proposed network on a highly confusable vocabulary (the English E-set). The recognition performance on the test set of a single-layer TW net improved from 61% (when initialized with the baseline HMMs) to 69% after training.
Extending the structure of the recognizer by one more layer of neurons, we obtained
further improvement of recognition accuracy up to 74.4%. Scatter plots of the results
indicate that in the multi-layer case, there is a qualitative change in the performance of
the recognizer, allowing us to set up a rejection criterion to improve the confidence of
the system.
References
1. H. Bourlard, CJ. Wellekens, "Links between Markov models and multilayer
perceptrons," Advances in Neural Information Processing Systems. pp.502-510,
Morgan Kaufmann, 1989.
2. J.S. Bridle, "Alpha-nets: a recurrent 'neural' network architecture with a hidden Markov model interpretation," Speech Communication, April 1990.
3. D.E. Rumelhart, G.E. Hinton and RJ. Williams, "Learning internal representation
by error propagation," Parallel Distributed Processing: Exploration in the
Microstructure of Cognition. MIT Press. 1986.
4. E. Levin. "Word recognition using hidden control neural architecture," Proc. of
ICASSP. Albuquerque, April 1990.
5. K.-Y. Su, C.-H. Lee, "Speech Recognition Using Weighted HMM and Subspace
Projection Approaches," Proc of ICASSP. Toronto, 1991.
6. L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in
speech recognition," Proc. of IEEE. vol. 77, No.2, pp. 257-286, February 1989.
7. C.-H. Lee, L. R. Rabiner, R. Pieraccini, J. G. Wilpon, "Acoustic Modeling for Large
Vocabulary Speech Recognition," Computer Speech and Language, 1990. No.4. pp.
127-165.
8. G.D. Forney, "The Viterbi algorithm," Proc. IEEE, vol. 61, pp. 268-278, Mar. 1973.
9. S.A. Solla, E. Levin, M. Fleisher, "Improved targets for multilayer perceptron
learning." Neural Networks Journal. 1988.
10. B.-H. Juang, S. Katagiri, "Discriminative Learning for :Minimum Error
Classification," IEEE Trans. on SP, to be published.
11. B.S. Atal, "Effectiveness of linear prediction characteristics of the speech wave for
automatic speaker identification and verification," J. Acoust. Soc. Am., vol. 55, No.6,
pp. 1304-1312, June 1974.
Figure 1: Scatter plot for baseline recognizer
Figure 2: Scatter plot for trained single-layer TWN
Figure 3: Scatter plot for multi-layer TWN
Figure 4: Rejection performance of baseline recognizer and the multi-layer TWN
| 449 |
3,855 | 4,490 | Learning from the Wisdom of Crowds by Minimax
Entropy
Dengyong Zhou, John C. Platt, Sumit Basu, and Yi Mao
Microsoft Research
1 Microsoft Way, Redmond, WA 98052
{denzho,jplatt,sumitb,yimao}@microsoft.com
Abstract
An important way to make large training sets is to gather noisy labels from crowds
of nonexperts. We propose a minimax entropy principle to improve the quality
of these labels. Our method assumes that labels are generated by a probability
distribution over workers, items, and labels. By maximizing the entropy of this
distribution, the method naturally infers item confusability and worker expertise.
We infer the ground truth by minimizing the entropy of this distribution, which we
show minimizes the Kullback-Leibler (KL) divergence between the probability
distribution and the unknown truth. We show that a simple coordinate descent
scheme can optimize minimax entropy. Empirically, our results are substantially
better than previously published methods for the same problem.
1 Introduction
There is an increasing interest in using crowdsourcing to collect labels for machine learning [19,
6, 21, 17, 20, 10, 13, 12]. Currently, many companies provide crowdsourcing services. Amazon
Mechanical Turk (MTurk) [2] and CrowdFlower [4] are perhaps the most well-known ones. An
advantage of crowdsourcing is that we can obtain a large number of labels at the low cost of pennies
per label. However, these workers are not experts, so the labels collected from them are often fairly
noisy. A fundamental challenge in crowdsourcing is inferring ground truth from noisy labels by a
crowd of nonexperts.
When each item is labeled several times by different workers, a straightforward approach is to use
the most common label as the true label. From reported experimental results on real crowdsourcing
data [19] and our own experience, majority voting performs significantly better on average than
individual workers. However, majority voting considers each item independently. When many items
are simultaneously labeled, it is reasonable to assume that the performance of a worker is consistent
across different items. This assumption underlies the work of Dawid and Skene [5, 18, 19, 11, 17],
where each worker is associated with a probabilistic confusion matrix that generates her labels. Each
entry of the matrix indicates the probability that items in one class are labeled as another. Given the
observed responses, the true labels for each item and the confusion matrices for each worker can be
jointly estimated by a maximum likelihood method. The optimization can be implemented by the
expectation-maximization (EM) algorithm [7].
Dawid and Skene's method works well in practice. However, their method only contains a per-worker probabilistic confusion model of generating labels. In this paper, we assume a separate
to jointly estimate the distributions and the ground truth given the observed labels by workers in
Section 2. The theoretical justification of minimum entropy is given in Section 2.1. To prevent overfitting, we relax the minimax entropy optimization in Section 3. We describe an easy-to-implement
technique to carry out the minimax program in Section 4 and link minimax entropy to a principle of
Observed labels (left):
            item 1   item 2   ...   item n
worker 1    z11      z12      ...   z1n
worker 2    z21      z22      ...   z2n
...         ...      ...      ...   ...
worker m    zm1      zm2      ...   zmn

Underlying distributions (right):
            item 1   item 2   ...   item n
worker 1    π11      π12      ...   π1n
worker 2    π21      π22      ...   π2n
...         ...      ...      ...   ...
worker m    πm1      πm2      ...   πmn
Figure 1: Left: observed labels. Right: underlying distributions. Highlights on both tables indicate
that rows and columns of the distributions are constrained by sums over observations.
objective measurements in Section 5. Finally, we present superior experimental results on real-world
crowdsourcing data in Section 6.
2 Minimax Entropy Principle
We propose a model illustrated in Figure 1. Each row corresponds to a crowdsourced worker indexed
by i (from 1 to m). Each column corresponds to an item to be labeled, indexed by j (from 1 to n).
Each item has an unobserved label represented as a vector yjl , which is 1 when item j is in class l
(from 1 to c), and 0 otherwise. More generally, we can treat yjl as the probability that item j is in
class l. We observe a matrix of labels zij by workers. The label matrix can also be represented as a
tensor z_{ijk}, which is 1 when worker i labels item j as class k, and 0 otherwise. We assume that z_{ij} are drawn from \pi_{ij}, which is the distribution for worker i to generate a label for item j. Again, \pi_{ij} can also be represented as a tensor \pi_{ijk}, which is the probability that worker i labels item j as class k. Our method will estimate y_{jl} from the observed z_{ij}.
We specify the form of \pi_{ij} through the maximum entropy principle, where the constraints on the maximum entropy combine the best ideas from previous work. Majority voting suggests that we should be constraining the \pi_{ij} per column, with the empirical observation of the number of votes per class per item: \sum_i z_{ijk} should match \sum_i \pi_{ijk}. Dawid and Skene's method suggests that we should be constraining the \pi_{ij} per row, with the empirical confusion matrix per worker: \sum_j y_{jl} z_{ijk} should match \sum_j y_{jl} \pi_{ijk}. We thus have the following maximum entropy model for \pi_{ij} given y_{jl}:
?
max
?
m X
n X
c
X
?ijk ln ?ijk
i=1 j=1 k=1
m
m
X
X
?ijk =
s.t.
i=1
c
X
zijk , ?j, k,
i=1
n
X
yjl ?ijk =
j=1
n
X
yjl zijk , ?i, k, l,
(1a)
j=1
?ijk = 1, ?i, j, ?ijk ? 0, ?i, j, k.
(1b)
k=1
We propose that, to infer y_jl, we should choose the y_jl that minimize the entropy in Equation (1). Intuitively, making π_ij "peaky" means that z_ij is the least random given y_jl. We make this intuition rigorous in Section 2.1. Thus, the inference for y_jl can be expressed by a minimax entropy program:

\min_{y}\max_{\pi} \; -\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{c} \pi_{ijk} \ln \pi_{ijk}
\text{s.t.} \quad \sum_{i=1}^{m} \pi_{ijk} = \sum_{i=1}^{m} z_{ijk}, \; \forall j,k, \qquad \sum_{j=1}^{n} y_{jl}\pi_{ijk} = \sum_{j=1}^{n} y_{jl}z_{ijk}, \; \forall i,k,l,   (2a)
\sum_{k=1}^{c} \pi_{ijk} = 1, \; \forall i,j, \quad \pi_{ijk} \ge 0, \; \forall i,j,k, \qquad \sum_{l=1}^{c} y_{jl} = 1, \; \forall j, \quad y_{jl} \ge 0, \; \forall j,l.   (2b)
2.1 Justification for Minimum Entropy
Now we justify the principle of choosing y_jl by minimizing entropy. Think of y_jl as a set of parameters to the worker-item label models π_ij. The goal in choosing the y_jl is to select π_ij that are as close as possible to the true distributions π*_ij.

To find a principle for choosing the y_jl, assume that we have access to the row and column measurements on the true distributions π*_ij. That is, assume that we know the true values of the column measurements φ_jk = Σ_i π*_ijk and row measurements ψ_ikl = Σ_j y_jl π*_ijk, for a chosen set of y_jl values. Knowing these true row and column measurements, we can apply the maximum entropy principle to generate distributions π_ij:
\max_{\pi} \; -\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{c} \pi_{ijk}\ln\pi_{ijk}
\text{s.t.} \quad \sum_{i=1}^{m} \pi_{ijk} = \phi_{jk}, \; \forall j,k, \qquad \sum_{j=1}^{n} y_{jl}\pi_{ijk} = \psi_{ikl}, \; \forall i,k,l.   (3)
Let D_KL(· ‖ ·) denote the KL divergence between two distributions. We can choose y_jl to minimize the loss of π_ij with respect to π*_ij given by

\ell(\pi^*, \pi) = \sum_{i=1}^{m}\sum_{j=1}^{n} D_{KL}\bigl(\pi^*_{ij} \,\|\, \pi_{ij}\bigr).   (4)
The minimum loss can be attained by choosing y_jl to minimize the entropy of the maximum entropy distributions π_ij. This can be shown by writing the Lagrangian of program (3):

L = -\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{c} \pi_{ijk}\ln\pi_{ijk} + \sum_{i=1}^{m}\sum_{j=1}^{n} \lambda_{ij}\Bigl(\sum_{k=1}^{c}\pi_{ijk} - 1\Bigr) + \sum_{j=1}^{n}\sum_{k=1}^{c} \tau_{jk}\sum_{i=1}^{m}\bigl(\pi_{ijk} - \pi^*_{ijk}\bigr) + \sum_{i=1}^{m}\sum_{k=1}^{c}\sum_{l=1}^{c} \sigma_{ikl}\sum_{j=1}^{n} y_{jl}\bigl(\pi_{ijk} - \pi^*_{ijk}\bigr),

where the newly introduced variables λ_ij, τ_jk and σ_ikl are the Lagrange multipliers. For a solution to be optimal, the Karush-Kuhn-Tucker (KKT) conditions must be satisfied [3]. Thus,
\frac{\partial L}{\partial \pi_{ijk}} = -\ln\pi_{ijk} - 1 + \lambda_{ij} + \sum_{l=1}^{c} y_{jl}\bigl(\tau_{jk} + \sigma_{ikl}\bigr) = 0, \; \forall i,j,k,

which can be rearranged as

\pi_{ijk} = \exp\Bigl(\sum_{l=1}^{c} y_{jl}(\tau_{jk} + \sigma_{ikl}) + \lambda_{ij} - 1\Bigr), \; \forall i,j,k.   (5)

To be a probability measure, the variables π_ijk have to satisfy

\sum_{k=1}^{c} \pi_{ijk} = \sum_{k=1}^{c} \exp\Bigl(\sum_{l=1}^{c} y_{jl}(\tau_{jk} + \sigma_{ikl}) + \lambda_{ij} - 1\Bigr) = 1, \; \forall i,j.   (6)
Eliminating λ_ij by jointly considering Equations (5) and (6), we obtain a labeling model in the exponential family:

\pi_{ijk} = \frac{\exp\bigl(\sum_{l=1}^{c} y_{jl}(\tau_{jk} + \sigma_{ikl})\bigr)}{\sum_{s=1}^{c}\exp\bigl(\sum_{l=1}^{c} y_{jl}(\tau_{js} + \sigma_{isl})\bigr)}, \; \forall i,j,k.   (7)

Plugging Equation (7) into (4) and performing some algebraic manipulations, we prove
Theorem 2.1 Let π_ij be the maximum entropy distributions in (3). Then,

\ell(\pi^*, \pi) = \sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{c} \bigl(\pi^*_{ijk}\ln\pi^*_{ijk} - \pi_{ijk}\ln\pi_{ijk}\bigr).

The second term is the only term that depends on y_jl. Therefore, we should choose y_jl to minimize the entropy of the maximum entropy distributions.
The labeling model expressed by Equation (7) has a natural interpretation. For each worker i, the multiplier set {σ_ikl} is a measure of her expertise, while for each item j, the multiplier set {τ_jk} is a measure of its confusability. A worker correctly labels an item either because she has good expertise or because the item is not that confusing. When the item or worker parameters are shifted by an arbitrary constant, the probability given by Equation (7) does not change. The redundancy of the constraints in (2a) causes the redundancy of the parameters.
3 Constraint Relaxation
In real crowdsourcing applications, each item is usually labeled only a few times. Moreover, a worker usually only labels a small subset of items rather than all of them. In such cases, it is unreasonable to expect that the constraints in (2a) hold for the true underlying distributions π_ij. As in the literature on regularized maximum entropy [14, 1, 9], we relax the optimization problem to prevent overfitting:
\min_{y}\max_{\pi,\xi,\zeta} \; -\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{c} \pi_{ijk}\ln\pi_{ijk} - \sum_{j=1}^{n}\sum_{k=1}^{c}\frac{\xi_{jk}^2}{2\alpha_j} - \sum_{i=1}^{m}\sum_{k=1}^{c}\sum_{l=1}^{c}\frac{\zeta_{ikl}^2}{2\beta_i}
\text{s.t.} \quad \sum_{i=1}^{m}\bigl(\pi_{ijk} - z_{ijk}\bigr) = \xi_{jk}, \; \forall j,k, \qquad \sum_{j=1}^{n} y_{jl}\bigl(\pi_{ijk} - z_{ijk}\bigr) = \zeta_{ikl}, \; \forall i,k,l,   (8a)
\sum_{k=1}^{c}\pi_{ijk} = 1, \; \forall i,j, \quad \pi_{ijk}\ge 0, \; \forall i,j,k, \qquad \sum_{l=1}^{c} y_{jl} = 1, \; \forall j, \quad y_{jl}\ge 0, \; \forall j,l,   (8b)
where α_j and β_i are regularization parameters. Program (8) reduces to program (2) when the slack variables ξ_jk and ζ_ikl are set to zero. The two ℓ2-norm based regularization terms in the objective function force the slack variables to stay close to zero. Other vector or matrix norms, such as the ℓ1-norm and the trace norm, can be applied as well [14, 1, 9]. We choose the ℓ2-norm only for the sake of simplicity in computation.

The justification for minimum entropy in Section 2.1 can be extended to the regularized minimax entropy formulation (8) with minor modifications. Instead of knowing the exact marginals, we need to choose π_ij based on noisy marginals:
\phi_{jk} = \sum_{i=1}^{m}\pi^*_{ijk} + \xi^*_{jk}, \; \forall j,k, \qquad \psi_{ikl} = \sum_{j=1}^{n} y_{jl}\pi^*_{ijk} + \zeta^*_{ikl}, \; \forall i,k,l.
We thus maximize the regularized entropy subject to the relaxed constraints:

\sum_{i=1}^{m}\pi_{ijk} + \xi_{jk} = \phi_{jk}, \; \forall j,k, \qquad \sum_{j=1}^{n} y_{jl}\pi_{ijk} + \zeta_{ikl} = \psi_{ikl}, \; \forall i,k,l.   (9)

Lemma 3.1 To be the regularized maximum entropy distributions subject to (9), π_ij must be represented as in Equation (7). Moreover, we should have ξ_jk = α_j τ_jk and ζ_ikl = β_i σ_ikl.

Proof The first part of the result can be verified as before. By using the labeling model in Equation (7), the Lagrangian of the regularized maximum entropy program can be written as
L = -\sum_{i=1}^{m}\sum_{j=1}^{n} \ln\sum_{s=1}^{c}\exp\Bigl(\sum_{l=1}^{c} y_{jl}(\tau_{js} + \sigma_{isl})\Bigr) - \sum_{j=1}^{n}\sum_{k=1}^{c}\frac{\xi_{jk}^2}{2\alpha_j} - \sum_{i=1}^{m}\sum_{k=1}^{c}\sum_{l=1}^{c}\frac{\zeta_{ikl}^2}{2\beta_i} + \sum_{j=1}^{n}\sum_{k=1}^{c}\tau_{jk}\Bigl(\sum_{i=1}^{m}\pi^*_{ijk} + (\xi^*_{jk} - \xi_{jk})\Bigr) + \sum_{i=1}^{m}\sum_{k=1}^{c}\sum_{l=1}^{c}\sigma_{ikl}\Bigl(\sum_{j=1}^{n} y_{jl}\pi^*_{ijk} + (\zeta^*_{ikl} - \zeta_{ikl})\Bigr).

For fixed τ_jk and σ_ikl, maximizing the Lagrange dual over ξ_jk and ζ_ikl provides the proof.
By Lemma 3.1 and some algebraic manipulations, we obtain

Theorem 3.2 Let π_ij be the regularized maximum entropy distributions subject to (9). Then,

\ell(\pi^*, \pi) = \sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{c}\pi^*_{ijk}\ln\pi^*_{ijk} - \sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{c}\pi_{ijk}\ln\pi_{ijk} + \sum_{j=1}^{n}\sum_{k=1}^{c}\frac{\xi^*_{jk}\,\xi_{jk}}{\alpha_j} + \sum_{i=1}^{m}\sum_{k=1}^{c}\sum_{l=1}^{c}\frac{\zeta^*_{ikl}\,\zeta_{ikl}}{\beta_i} - \sum_{j=1}^{n}\sum_{k=1}^{c}\frac{\xi_{jk}^2}{\alpha_j} - \sum_{i=1}^{m}\sum_{k=1}^{c}\sum_{l=1}^{c}\frac{\zeta_{ikl}^2}{\beta_i}.   (10)

We cannot minimize the loss by minimizing the right side of Equation (10) since the random noise is unknown. However, we can consider minimizing an upper bound instead. Note that

\xi^*_{jk}\,\xi_{jk} \le \bigl(\xi^{*2}_{jk} + \xi_{jk}^2\bigr)/2, \; \forall j,k, \qquad \zeta^*_{ikl}\,\zeta_{ikl} \le \bigl(\zeta^{*2}_{ikl} + \zeta_{ikl}^2\bigr)/2, \; \forall i,k,l.   (11)

Denote by Φ(π, ξ, ζ) the objective function of the regularized minimax entropy program (8). Substituting the inequalities in (11) into Equation (10), we have

\ell(\pi^*, \pi) \le \Phi(\pi, \xi, \zeta) - \Phi(\pi^*, \xi^*, \zeta^*).   (12)

So minimizing the regularized maximum entropy leads to minimizing an upper bound of the loss.
4 Optimization Algorithm
A typical approach to constrained optimization is to convert the primal problem to its dual form. By Lemma 3.1, the Lagrangian of program (8) can be written as

L = -\sum_{j=1}^{n}\ln\prod_{i=1}^{m}\frac{\exp\bigl(\sum_{k=1}^{c} z_{ijk}\sum_{l=1}^{c} y_{jl}(\tau_{jk} + \sigma_{ikl})\bigr)}{\sum_{s=1}^{c}\exp\bigl(\sum_{l=1}^{c} y_{jl}(\tau_{js} + \sigma_{isl})\bigr)} + \sum_{j=1}^{n}\sum_{k=1}^{c}\frac{\alpha_j\,\tau_{jk}^2}{2} + \sum_{i=1}^{m}\sum_{k=1}^{c}\sum_{l=1}^{c}\frac{\beta_i\,\sigma_{ikl}^2}{2}.

The dual problem minimizes L subject to the simplex constraints {y_jl | Σ_{l=1}^c y_jl = 1, ∀j; y_jl ≥ 0, ∀j,l}. It can be solved by coordinate descent with the variables split into two groups: {y_jl} and {τ_jk, σ_ikl}. It is easy to check that, when the variables in one group are fixed, the optimization problem on the variables in the other group is convex. When the y_jl are restricted to be {0, 1}, that is, deterministic labels, the coordinate descent procedure can be simplified. Let

p_{jl} = \prod_{i=1}^{m}\frac{\exp\bigl(\sum_{k=1}^{c} z_{ijk}(\tau_{jk} + \sigma_{ikl})\bigr)}{\sum_{s=1}^{c}\exp(\tau_{js} + \sigma_{isl})}.
For any set of real-valued numbers {γ_jl | Σ_{l=1}^c γ_jl = 1, ∀j; γ_jl > 0, ∀j,l}, we have the inequality

\sum_{j=1}^{n}\ln\prod_{i=1}^{m}\frac{\exp\bigl(\sum_{k=1}^{c} z_{ijk}\sum_{l=1}^{c} y_{jl}(\tau_{jk}+\sigma_{ikl})\bigr)}{\sum_{s=1}^{c}\exp\bigl(\sum_{l=1}^{c} y_{jl}(\tau_{js}+\sigma_{isl})\bigr)}
= \sum_{j=1}^{n}\ln\sum_{l=1}^{c} y_{jl}\,p_{jl}   (deterministic labels)
= \sum_{j=1}^{n}\ln\sum_{l=1}^{c}\gamma_{jl}\,\frac{y_{jl}\,p_{jl}}{\gamma_{jl}}
\ge \sum_{j=1}^{n}\sum_{l=1}^{c}\gamma_{jl}\ln\frac{y_{jl}\,p_{jl}}{\gamma_{jl}}   (Jensen's inequality)
= \sum_{j=1}^{n}\sum_{l=1}^{c}\gamma_{jl}\ln\bigl(y_{jl}\,p_{jl}\bigr) - \sum_{j=1}^{n}\sum_{l=1}^{c}\gamma_{jl}\ln\gamma_{jl}.
Plugging the last line into the Lagrangian L, we obtain an upper bound of L, called F. It can be shown that we must have y_jl = γ_jl at any stationary point of F. Our optimization algorithm is a coordinate descent minimization of this F [15, 7]. We initialize y_jl with the majority vote in Equation (13). In each iteration step, we first optimize over τ_jk and σ_ikl in (14a), which can be solved by any convex optimization procedure, and next optimize over y_jl using a simple closed form in (14b). The optimization over y_jl is the same as applying Bayes' theorem, where the result from the last iteration is taken as a prior. This algorithm can be shown to produce only deterministic labels.
Algorithm 1 Minimax Entropy Learning from Crowds
input: {z_ijk} ∈ {0,1}^{m×n×c}, {α_j} ∈ R^n_+, {β_i} ∈ R^m_+
initialization:
    y^0_{jl} = \frac{\sum_{i=1}^{m} z_{ijl}}{\sum_{i=1}^{m}\sum_{k=1}^{c} z_{ijk}}, \; \forall j,l   (13)
for t = 1, 2, ...
    \{\tau^t_{jk}, \sigma^t_{ikl}\} = \arg\min_{\tau,\sigma} \sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{l=1}^{c} y^{t-1}_{jl}\Bigl(\log\sum_{s=1}^{c}\exp(\tau_{js} + \sigma_{isl}) - \sum_{k=1}^{c} z_{ijk}(\tau_{jk} + \sigma_{ikl})\Bigr) + \sum_{j=1}^{n}\sum_{k=1}^{c}\frac{\alpha_j\,\tau_{jk}^2}{2} + \sum_{i=1}^{m}\sum_{k=1}^{c}\sum_{l=1}^{c}\frac{\beta_i\,\sigma_{ikl}^2}{2}   (14a)
    y^t_{jl} \propto y^{t-1}_{jl}\prod_{i=1}^{m}\frac{\exp\bigl(\sum_{k=1}^{c} z_{ijk}(\tau^t_{jk} + \sigma^t_{ikl})\bigr)}{\sum_{s=1}^{c}\exp(\tau^t_{js} + \sigma^t_{isl})}, \; \forall j,l   (14b)
output: {y^t_{jl}}
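A minimal sketch of this coordinate descent follows, assuming every worker labels every item (dense z) and using plain gradient descent as the inner convex solver for (14a); the step size and iteration counts are illustrative choices, not part of the algorithm itself.

import numpy as np
from scipy.special import softmax, logsumexp

def minimax_entropy(z, alpha, beta, n_outer=20, n_inner=200, lr=0.1):
    # z: binary array of shape (m, n, c); alpha: (n,) item regularizers;
    # beta: (m,) worker regularizers.
    m, n, c = z.shape
    y = z.sum(axis=0).astype(float)
    y /= y.sum(axis=1, keepdims=True)            # (13): majority-vote initialization
    tau = np.zeros((n, c))                       # item confusability parameters
    sigma = np.zeros((m, c, c))                  # worker expertise, indexed [i, l, k]
    for _ in range(n_outer):
        # (14a): gradient descent on the regularized dual objective.
        for _ in range(n_inner):
            logits = tau[None, :, None, :] + sigma[:, None, :, :]   # [i, j, l, k]
            p = softmax(logits, axis=3)
            resid = (p - z[:, :, None, :]) * y[None, :, :, None]    # weighted by y_jl
            tau -= lr * (resid.sum(axis=(0, 2)) + alpha[:, None] * tau)
            sigma -= lr * (resid.sum(axis=1) + beta[:, None, None] * sigma)
        # (14b): Bayes-style multiplicative update of the label posterior.
        logits = tau[None, :, None, :] + sigma[:, None, :, :]
        log_p = logits - logsumexp(logits, axis=3, keepdims=True)
        log_y = np.log(y + 1e-12) + np.einsum('ijk,ijlk->jl', z, log_p)
        y = softmax(log_y, axis=1)
    return y, tau, sigma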
5 Measurement Objectivity Principle
The measurement objectivity principle can be roughly stated as follows: (1) a comparison of labeling
confusability between two items should be independent of which particular workers are included for
the comparison; (2) symmetrically, a comparison of labeling expertise between two workers should
be independent of which particular items are included for the comparison. The first statement is
about the objectivity of item confusability. The second statement is about the objectivity of worker
expertise. In what follows, we mathematically define the measurement objectivity principle. For
deterministic labels, we show that the labeling model in Equation (7) can be recovered from the
measurement objectivity principle.
From Equation (7), given item j in class l, the probability that worker i labels it as class k is

\pi_{ijkl} = \frac{\exp(\tau_{jk} + \sigma_{ikl})}{\sum_{s=1}^{c}\exp(\tau_{js} + \sigma_{isl})}.   (15)
Assume that a worker i has labeled two items j and j′, both of which are from the same class l. With respect to the given worker i, for each item, we measure the confusability for class k by

\theta_{ijk} = \frac{\pi_{ijkl}}{\pi_{ijll}}, \qquad \theta_{ij'k} = \frac{\pi_{ij'kl}}{\pi_{ij'll}}.   (16)

For comparing the item confusabilities, we compute the ratio between them. To maintain the objectivity of confusability, the ratio should not depend on whichever worker is involved in the comparison. Hence, given another worker i′, we should have

\frac{\pi_{ijkl}}{\pi_{ijll}} \Big/ \frac{\pi_{ij'kl}}{\pi_{ij'll}} = \frac{\pi_{i'jkl}}{\pi_{i'jll}} \Big/ \frac{\pi_{i'j'kl}}{\pi_{i'j'll}}.   (17)

It is straightforward to verify that the labeling model in Equation (15) indeed satisfies the objectivity requirement given by Equation (17). We can further show that a labeling model which satisfies Equation (17) has to be expressed by Equation (15). Let us rewrite Equation (17) as

\frac{\pi_{ijkl}}{\pi_{ijll}} = \frac{\pi_{ij'kl}\,\pi_{i'jkl}\,\pi_{i'j'll}}{\pi_{ij'll}\,\pi_{i'jll}\,\pi_{i'j'kl}}.

Without loss of generality, choose i′ = 0 and j′ = 0 as the fixed references such that

\frac{\pi_{ijkl}}{\pi_{ijll}} = \frac{\pi_{i0kl}\,\pi_{0jkl}\,\pi_{00ll}}{\pi_{i0ll}\,\pi_{0jll}\,\pi_{00kl}}.   (18)
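To make the verification of (17) explicit, substitute (15) into one side of the equation; the normalizers cancel within each ratio (both probabilities share the same item and true class), and so do all worker-dependent terms:

\frac{\pi_{ijkl}/\pi_{ijll}}{\pi_{ij'kl}/\pi_{ij'll}}
= \frac{e^{\tau_{jk}+\sigma_{ikl}}\,/\,e^{\tau_{jl}+\sigma_{ill}}}{e^{\tau_{j'k}+\sigma_{ikl}}\,/\,e^{\tau_{j'l}+\sigma_{ill}}}
= \exp\bigl(\tau_{jk} - \tau_{jl} - \tau_{j'k} + \tau_{j'l}\bigr).

The result does not involve the worker index i, so any other worker i′ yields the same value, which is exactly the requirement in Equation (17).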
Assume that the referenced worker 0 chooses a class uniformly at random for the referenced item 0. So we have π_00ll = π_00kl = 1/c. Equation (18) implies π_ijkl ∝ π_i0kl π_0jkl. Reparameterizing with π_i0kl = exp(σ_ikl) and π_0jkl = exp(τ_jk) (note that l is dropped since it is determined by j), we have π_ijkl ∝ exp(τ_jk + σ_ikl). The labeling model in Equation (15) has been recovered.

Figure 2: Sample images of four breeds of dogs from the Stanford dogs dataset: (a) Norfolk Terrier, (b) Norwich Terrier, (c) Irish Wolfhound, (d) Scottish Deerhound.
Symmetrically, we can also start from the objectivity of worker expertise to recover the labeling model in (15). Assume that two workers i and i′ have labeled a common item j which is from class l. With respect to the given item j, for each worker, we measure the confusion from class l to k by

\theta_{ijk} = \frac{\pi_{ijkl}}{\pi_{ijll}}, \qquad \theta_{i'jk} = \frac{\pi_{i'jkl}}{\pi_{i'jll}}.   (19)

For comparing the worker expertises, we compute the ratio between them. To maintain the objectivity of expertise, the ratio should not depend on whichever item is involved in the comparison. Hence, given another item j′ in class l, we should have

\frac{\pi_{ijkl}}{\pi_{ijll}} \Big/ \frac{\pi_{i'jkl}}{\pi_{i'jll}} = \frac{\pi_{ij'kl}}{\pi_{ij'll}} \Big/ \frac{\pi_{i'j'kl}}{\pi_{i'j'll}}.   (20)

We can see that Equation (20) is just a rearrangement of Equation (17).
6 Experimental Validation
We compare our method with majority voting and Dawid & Skene's method [5] on two real crowdsourcing datasets. One is multiclass image labeling, and the other is web search relevance judging.
6.1 Image Labeling
We chose the images of 4 breeds of dogs from the Stanford dogs dataset [8]: Norfolk Terrier (172), Norwich Terrier (185), Irish Wolfhound (218), and Scottish Deerhound (232) (see Figure 2). The numbers in parentheses are the image counts per breed; there are 807 images in total. We submitted them to MTurk and received labels from 109 MTurk workers. A worker labeled an image at most once, and each image was labeled 10 times. It is difficult to evaluate a worker if she labeled only a few images, so we consider only the workers who labeled at least 40 images, which yields a label set containing 7354 labels by 52 workers. Each image has at least 4 labels, and around 95% of the images have at least 8 labels. The average accuracy of the workers is 70.60%. The best worker achieved an accuracy of 88.24% while labeling only 68 images; the most prolific worker labeled 345 images and achieved an accuracy of 68.99%. The average worker confusion matrix between breeds is shown in Table 2. As expected, it consists of two blocks: one block contains Norfolk Terrier and Norwich Terrier, and the other contains Irish Wolfhound and Scottish Deerhound. For our method, the regularization parameters are set as α_j = 100/(number of labels for item j) and β_i = 100/(number of labels by worker i). The performance of the various methods on this image labeling task is summarized in Table 1. For this problem, our method is somewhat better than Dawid and Skene's method.

Table 1: Accuracy of methods (%)

  Method           | Dogs  | Web
  -----------------|-------|------
  Minimax Entropy  | 84.63 | 88.05
  Dawid & Skene    | 84.14 | 83.98
  Majority Voting  | 82.09 | 73.07
  Average Worker   | 70.60 | 37.05

Table 2: Average worker confusion (%), rows summing to 100 over the assigned breeds

            | Norfolk | Norwich | Irish | Scottish
  Norfolk   |  71.04  |  27.35  |  1.03 |   0.58
  Norwich   |  31.99  |  66.71  |  1.13 |   0.18
  Irish     |   1.19  |   0.55  | 69.35 |  28.91
  Scottish  |   1.20  |   0.38  | 26.77 |  71.65
6.2 Web Search Relevance Judging
In another experiment, we asked workers to rate a set of 2665 query-URL pairs on a relevance rating scale from 1 to 5; the larger the rating, the more relevant the URL. The true labels were derived using consensus from 9 experts. The noisy labels were provided by 177 nonexpert workers. Each pair was judged by around 6 workers, and each worker judged a subset of the pairs. The average accuracy of the workers is 37.05%. Seventeen workers have an accuracy of 0, and they judged at most 7 pairs each; the worker who judged the most pairs judged 1225 and achieved an accuracy of 76.73%. For our method, the regularization parameters are set as α_j = 200/(number of labels for item j) and β_i = 200/(number of labels by worker i). The performance of the various methods on this relevance judging task is summarized in Table 1. In this case, our method is substantially better.
7 Related Work
This paper can be regarded as a natural extension of Dawid and Skene's work [5], discussed in Section 1. Our approach reduces to Dawid and Skene's by setting the regularization parameters to α_j = ∞ and β_i = 0. The essential difference between our work and Dawid and Skene's is that, in addition to worker expertise, we also take item confusability into account.
In computer vision, a minimax entropy method was proposed for estimating the probability density
of certain visual patterns such as textures [22]. The authors compute empirical marginal distributions
through various features, then construct a density model that can reproduce all empirical marginal
distributions. Among all models satisfying the constraints, the one with maximum entropy is preferred. However, one wants to select the features which are most informative: the constructed model
should approximate the underlying density by minimizing a KL divergence. The authors formulate
the combined density estimation and feature selection as a minimax entropy problem.
The measurement objectivity principle is inspired by the Rasch model [16], used to design and analyze psychological and educational measurements. In the Rasch model, given an examinee and a test item, the probability of a correct response is modeled as a logistic function of the difference between the examinee ability and the item difficulty. Rasch defined "specific objectivity": the comparison of any two subjects can be carried out in such a way that no other parameters are involved than those of the two subjects. The specific objectivity property of the Rasch model comes from the algebraic separation of examinee and item parameters. If the probability of a correct response is modeled in another form, such as a logistic function of the ratio between the examinee ability and the item difficulty [21], objective measurements cannot be achieved. The most fundamental difference between the Rasch model and our work is that we must infer the ground truth, rather than take it as given.
8 Conclusion
We have proposed a minimax entropy principle for estimating the true labels from the judgements
of a crowd of nonexperts. We have also shown that the labeling model derived from the minimax
entropy principle uniquely satisfies an objectivity principle for measuring worker expertise and item
confusability. Experimental results on real-world crowdsourcing data demonstrate that the proposed
method estimates ground truth more accurately than previously proposed methods. The presented
framework can be easily extended. For example, in the web search experiment, the multilevel relevance scale is treated as multiclass. By taking the ordinal property of ratings into account, the
accuracy may be further improved. The framework could be extended to real-valued labels. A
detailed discussion on those topics is beyond the scope of this paper.
Acknowledgments
We thank Daniel Hsu, Xi Chen, Chris Burges and Chris Meek for helpful discussions, and Gabriella
Kazai for generating the web search dataset.
References
[1] Y. Altun and A. Smola. Unifying divergence minimization and statistical inference via convex duality. In Proceedings of the 19th Annual Conference on Learning Theory, 2006.
[2] Amazon Mechanical Turk. https://www.mturk.com/mturk.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] CrowdFlower. http://crowdflower.com/.
[5] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society, 28(1):20-28, 1979.
[6] O. Dekel and O. Shamir. Vox populi: Collecting high-quality labels from a crowd. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[7] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1-38, 1977.
[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009.
[9] M. Dudik, S. J. Phillips, and R. E. Schapire. Maximum entropy density estimation with generalized regularization and an application to species distribution modeling. Journal of Machine Learning Research, 8:1217-1260, 2007.
[10] S. Ertekin, H. Hirsh, and C. Rudin. Approximating the wisdom of the crowd. In Proceedings of the Workshop on Computational Social Science and the Wisdom of Crowds, 2011.
[11] P. G. Ipeirotis, F. Provost, and J. Wang. Quality management on Amazon Mechanical Turk. In Proceedings of the ACM SIGKDD Workshop on Human Computation, pages 64-67, 2010.
[12] E. Kamar, S. Hacker, and E. Horvitz. Combining human and machine intelligence in large-scale crowdsourcing. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, pages 467-474, 2012.
[13] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In Advances in Neural Information Processing Systems 24, pages 1953-1961, 2011.
[14] G. Lebanon and J. Lafferty. Boosting and maximum likelihood for exponential models. In Advances in Neural Information Processing Systems 14, pages 447-454, 2001.
[15] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355-368. Kluwer Academic, Dordrecht, 1998.
[16] G. Rasch. On general laws and the meaning of measurement in psychology. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, volume 4, pages 321-333, Berkeley, CA, 1961.
[17] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. Journal of Machine Learning Research, 11:1297-1322, 2010.
[18] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective labelling of Venus images. In Advances in Neural Information Processing Systems, pages 1085-1092, 1995.
[19] R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 254-263, 2008.
[20] P. Welinder, S. Branson, S. Belongie, and P. Perona. The multidimensional wisdom of crowds. In Advances in Neural Information Processing Systems 23, pages 2424-2432, 2010.
[21] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems 22, pages 2035-2043, 2009.
[22] S. C. Zhu, Y. N. Wu, and D. B. Mumford. Minimax entropy principle and its applications to texture modeling. Neural Computation, 9:1627-1660, 1997.
Efficient Sampling for Bipartite Matching Problems
Richard S. Zemel
University of Toronto
[email protected]
Maksims N. Volkovs
University of Toronto
[email protected]
Abstract
Bipartite matching problems characterize many situations, ranging from ranking in information retrieval to correspondence in vision. Exact inference in real-world applications of these problems is intractable, making efficient approximation methods essential for learning and inference. In this paper we propose a novel sequential matching sampler based on a generalization of the Plackett-Luce model, which can effectively make large moves in the space of matchings. This allows the sampler to match the difficult target distributions common in these problems: highly multimodal distributions with well separated modes. We present experimental results with bipartite matching problems (ranking and image correspondence) which show that the sequential matching sampler efficiently approximates the target distribution, significantly outperforming other sampling approaches.
1 Introduction
Bipartite matching problems (BMPs), which involve mapping one set of items to another, are ubiquitous, with applications ranging from computational biology to information retrieval to computer
vision. Many problems in these domains can be expressed as a bipartite graph, with one node for
each of the items, and edges representing the compatibility between pairs.
In a typical BMP a set of labeled instances with target matches is provided together with feature
descriptions of the items. The features for any two items do not provide a natural measure of
compatibility between the items, i.e., should they be matched or not. Consequently the goal of
learning is to create a mapping from the item features to the target matches such that when an
unlabeled instance is presented the same mapping can be applied to accurately infer the matches.
Probabilistic formulations of this problem, which involve specifying a distribution over possible
matches, have become increasingly popular, e.g., [23, 26, 1], and these models have been applied to
problems ranging from preference aggregation in social choice and information retrieval [7, 13] to
multiple sequence protein alignment in computational biology [24, 27].
However, exact learning and inference in real-world applications of these problems quickly become
intractable because the state space is typically factorial in the number of items. Approximate inference methods are also problematic in this domain. Variational approaches, in which aspects of the
joint distribution are treated independently, may miss important contingencies in the joint. On the
other hand sampling is hard, plagued by the multimodality and strict constraints inherent in discrete
combinatorial spaces.
Recently there has been a flurry of new methods for sampling for bipartite matching problems.
Some of these have strong theoretical properties [10, 9], while others are appealingly simple [6, 13].
However, to the best of our knowledge, even for simple versions of bipartite matching problems,
no efficient sampler exists. In this paper we propose a novel Markov Chain Monte Carlo (MCMC)
method applicable to a wide subclass of BMPs. We compare the efficiency and performance of our
sampler to others on two applications.
2 Problem Formulation
A standard BMP consists of two sets of N items, U = {u_1, ..., u_N} and V = {v_1, ..., v_N}. The goal is to find an assignment of the items so that every item in U is matched to exactly one item in V and no two items share the same match. In this problem an assignment corresponds to a permutation π, where π is a bijection {1, ..., N} → {1, ..., N} mapping each item in U to its match in V; we use the terms assignment and permutation interchangeably. We define π(i) = j to denote the index of the match v_π(i) = v_j for item u_i in π, and use π^{-1}(j) = i to denote the reverse. Permutations have the useful property that any subset of a permutation also constitutes a valid permutation with respect to the items in the subset. We will utilize this property in later sections; here we introduce the notation. Given a full permutation π we define π_{1:t} (with π_{1:0} = ∅) as the partial permutation of only the first t items in U.

To express uncertainty over assignments, we use the standard Gibbs form to define the probability of a permutation π:

P(\pi\,|\,\theta) = \frac{1}{Z(\theta)}\exp\bigl(-E(\pi, \theta)\bigr), \qquad Z(\theta) = \sum_{\pi}\exp\bigl(-E(\pi, \theta)\bigr)   (1)
where θ is the set of model parameters and E(π, θ) is the energy. We assume, without loss of generality, that the energy E(π, θ) is given by a sum of single and/or higher order potentials.

Many important problems can be formulated in this form. For example, in information retrieval the crucial problem of learning a ranking function can be modeled as a BMP [12, 26]. In this domain U corresponds to a set of documents and V to a set of ranks. The energy of a given assignment is typically formulated as a combination of the ranks and the model's output from the query-document features. For example in [12] the energy is defined as:

E(\pi, \theta) = -\frac{1}{N}\sum_{i=1}^{N} s_i\,\bigl(N - \pi(i) + 1\bigr)   (2)

where s_i is a score assigned by the model to u_i. Similarly, in computer vision the problem of finding a correspondence between sets of images can be expressed as a BMP [5, 3, 17]. Here U and V are typically sets of points in the two images and the energy is defined on the feature descriptors of these points. For example in [17] the energy is given by:

E(\pi, \theta) = \frac{1}{|\theta|}\sum_{i=1}^{N}\bigl\langle \theta,\, (\psi^u_i - \psi^v_{\pi(i)})^2 \bigr\rangle   (3)

where ψ^u_i and ψ^v_{π(i)} are feature descriptors for points u_i and v_π(i). Finally, some clustering problems can also be expressed in the form of a BMP [8]. It is important to note here that for all models where the energy is additive we can compute the energy E(π_{1:t}, θ) for any partial permutation π_{1:t} by summing the potentials only over the t assignments in π_{1:t}. For instance, for the energy in Equation 3, E(\pi_{1:t}, \theta) = \frac{1}{|\theta|}\sum_{i=1}^{t}\bigl\langle \theta, (\psi^u_i - \psi^v_{\pi_{1:t}(i)})^2\bigr\rangle with E(π_{1:0}, θ) = 0.
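As an illustration of this additivity, a minimal sketch of the partial energy for Equation 3 is given below, assuming the descriptors are stored row-wise in arrays psi_u and psi_v (the names are illustrative):

import numpy as np

def partial_energy(theta, psi_u, psi_v, matches):
    # E(pi_{1:t}, theta) for the correspondence energy in Equation 3;
    # matches is a list of (i, j) pairs giving the first t assignments,
    # and the empty list yields E(pi_{1:0}, theta) = 0.
    return sum(theta @ (psi_u[i] - psi_v[j]) ** 2 for i, j in matches) / len(theta)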
Learning in these models typically involves maximizing the log probability of the correct match as a function of θ. To do this one generally needs the gradient of the log probability with respect to θ:

\frac{\partial \log P(\pi\,|\,\theta)}{\partial \theta} = -\frac{\partial E(\pi,\theta)}{\partial \theta} - \frac{\partial \log Z(\theta)}{\partial \theta}.

Unfortunately, computing the gradient with respect to the partition function requires a summation over N! valid assignments, which very quickly becomes intractable. For example, for N = 20 finding ∂log Z(θ)/∂θ requires over 10^17 summations. Thus effective approximation techniques are necessary to learn such models.
A particular instance of BMP that has been studied extensively is the maximum weight bipartite matching problem (WBMP). In WBMP the energy reduces to only the single potential φ:

E^{unary}(\pi, \theta) = \sum_{i}\varphi\bigl(u_i, v_{\pi(i)}, \theta\bigr)   (4)

Equations 2 and 3 are both examples of WBMP energies. Finding the assignment with the maximum energy is tractable and can be solved in O(N^3) [16]. Determining the partition function in a WBMP is equivalent to finding the permanent of the edge weight matrix (defined by the unary potential), a well-known #P problem [25]. The majority of the proposed samplers are designed for
WBMPs and cannot be applied to the more general BMPs where the energy includes higher order
potentials. However, distributions based on higher order potentials allow greater flexibility and have
been actively used in problems ranging from computer vision and robotics [20, 2] to information
retrieval [19, 26]. There is thus an evident need to develop an effective sampler applicable to any
BMP distribution.
3 Related Approaches
In this section we briefly describe existing sampling approaches, some of which have been developed
specifically for bipartite matching problems while others come from matrix permanent research.
3.1 Gibbs Sampling
Gibbs and block-Gibbs sampling can be applied straightforwardly to sample from distributions defined by Equation 1. To do that, we start with some initial assignment π and consider a subset of items in U; for illustration purposes we will use two items u_i and u_j. Given the selected subset of items, the Gibbs sampler considers all possible assignment swaps within this subset. In our example there are only two possibilities: leave π unchanged, or swap π(i) with π(j) to produce a new permutation π′. Conditioned on the assignment of all the other items in U that were not selected, the probability of each permutation is:

p(\pi'\,|\,\pi_{\setminus\{i,j\}}) = \frac{\exp(-E(\pi', \theta))}{\exp(-E(\pi, \theta)) + \exp(-E(\pi', \theta))}, \qquad p(\pi\,|\,\pi_{\setminus\{i,j\}}) = 1 - p(\pi'\,|\,\pi_{\setminus\{i,j\}}),

where π_{∖{i,j}} is the permutation π with u_i and u_j removed. We sample using these probabilities to either stay at π or move to π′, and repeat the process.

Gibbs sampling has been applied to a wide range of energy-based probabilistic models. It is often found to mix very slowly and to get trapped in local modes [22]. The main reason for this is that the path from one probable assignment to another using only pairwise swaps is likely to go through regions that have very low probability [5]. This makes it very unlikely that those moves will be accepted, which typically traps the sampler in one mode. Thus, the local structure of the Gibbs sampler is likely to be inadequate for problems of the type considered here, in which several probable assignments produce well-separated modes.
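For reference, one block of this pairwise Gibbs sampler can be sketched as follows, assuming energy is any callable that scores an assignment as in Equation 1 (the helper names are illustrative):

import numpy as np

def gibbs_swap(pi, energy, rng):
    # One pairwise Gibbs move: pick two items at random and resample
    # their matches conditioned on the rest of the assignment.
    # pi[i] is the index of the current match of item u_i.
    i, j = rng.choice(len(pi), size=2, replace=False)
    pi_swap = pi.copy()
    pi_swap[i], pi_swap[j] = pi[j], pi[i]
    # p(pi') = exp(-E') / (exp(-E) + exp(-E')) = 1 / (1 + exp(E' - E))
    p_swap = 1.0 / (1.0 + np.exp(energy(pi_swap) - energy(pi)))
    return pi_swap if rng.random() < p_swap else pi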
3.2 Chain-Based Approaches
Chain-based methods extend the assignment swap idea behind the Gibbs sampler to generate samples more efficiently from WBMP distributions. Instead of randomly choosing subsets of items to swap, chain-based methods generate a sequence (chain) of interdependent swaps. Given a (random) starting permutation π, an item u_i (currently matched with v_π(i)) is selected at random and a new match v_j is proposed with probability p(u_i, v_j | π), where p depends on the unary potential φ(u_i, v_j, θ) in the WBMP energy (see Equation 4). Now, assuming the match {u_i, v_j} is selected, matches {u_i, v_π(i)} and {u_{π^{-1}(j)}, v_j} are removed from π and {u_i, v_j} is added to make π′. After this change u_{π^{-1}(j)} and v_π(i) are no longer matched to any item, so π′ is a partial assignment. The procedure then finds a new match for u_{π^{-1}(j)} using p. This chain-like match sampling is repeated either until π′ is a complete assignment or a termination criterion is reached.

Several chain-based methods have been proposed, including the chain flipping approach [5] and the Markov Chain approach [11]. Dellaert et al. [5] empirically demonstrated that the chain flipping sampler can mix better than the Gibbs sampler when applied to multimodal distributions. However, chain-based methods also have several drawbacks that significantly affect their performance. First, unlike the Gibbs sampler, which always maintains a valid assignment, the intermediate assignments π′ in chain-based methods are incomplete. This means that the chain either has to be run until a valid assignment is generated [5] or terminated early, producing an incomplete assignment [11]. In the first case the sampler has a non-deterministic run-time, whereas in the second case the incomplete assignment cannot be taken as a valid sample from the model. Finally, to the best of our knowledge no chain-based method can be applied to general BMPs because they are specifically designed for E^{unary} (see Equation 4).
Figure 1 (panels (a) t = 0 through (d) t = 3): Top row: the Plackett-Luce generative process viewed as rank matching. Bottom row: the sequential matching procedure. Items are U = {u_1, u_2, u_3} and V = {v_1, v_2, v_3}; the reference permutation is σ = {2, 3, 1}. The proposed matches are shown as red dotted arrows and accepted matches as black arrows.
3.3 Recursive Partitioning Algorithm
The recursive partitioning [10] algorithm was developed to obtain exact samples from the distribution of a WBMP. This method is considered to be the state of the art in matrix permanent research and, to the best of our knowledge, has the lowest expected run time. Recursive partitioning proceeds by splitting the space of all valid assignments Ω into K subsets Ω_1, ..., Ω_K with corresponding partition functions Z_1, ..., Z_K. It then samples one of these subsets and repeats the partitioning procedure recursively, generating exact samples from a WBMP distribution.

Despite strong theoretical guarantees, the recursive partitioning procedure has a number of limitations that significantly affect its applicability. First, the running time of the sampler is non-deterministic, as the algorithm has to be restarted every time a sample falls outside of Ω. The probability of restart increases with N, which is an undesirable property, especially for training large models, where one typically needs precise control over the time spent in each training phase. Moreover, this algorithm is also specific to WBMP and cannot be generalized to sample from arbitrary BMP distributions with higher order potentials.
3.4 Plackett-Luce Model
Our proposed sampling approach is based on a generalization of the well-established Plackett-Luce model [18, 14], which is a generative model for permutations. Given a set of items V = {v_1, ..., v_N}, a Plackett-Luce model is parametrized by a set of weights (one per item) W = {w_1, ..., w_N}. Under this model a permutation π is generated by first selecting item v_π(1) from the set of N items and placing it in the first position, then selecting v_π(2) from the remaining N − 1 items and placing it in the second position, and so on until all N items are placed. The probability of π under this model is given by:

Q(\pi) = \frac{\exp(w_{\pi(1)})}{\sum_{i=1}^{N}\exp(w_{\pi(i)})} \times \frac{\exp(w_{\pi(2)})}{\sum_{i=2}^{N}\exp(w_{\pi(i)})} \times \cdots \times \frac{\exp(w_{\pi(N)})}{\exp(w_{\pi(N)})}   (5)

Here \exp(w_{\pi(t)}) \big/ \sum_{i=t}^{N}\exp(w_{\pi(i)}) is the probability of choosing the item v_π(t) out of the N − t + 1 remaining items. It can be shown that Q is a valid distribution with Σ_π Q(π) = 1. Moreover, it is very easy to draw samples from Q by applying the sequential procedure described above. In the next section we show how this model can be generalized to draw samples from any BMP distribution.
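As a concrete illustration of this sequential procedure, the following minimal sketch draws a permutation from a Plackett-Luce model with weights w; the function and variable names are illustrative.

import numpy as np

def sample_plackett_luce(w, rng=None):
    # Draw a permutation pi from Q in Equation (5); pi[t] is the index
    # of the item placed in position t.
    rng = np.random.default_rng() if rng is None else rng
    remaining = list(range(len(w)))
    pi = []
    for _ in range(len(w)):
        logits = np.array([w[i] for i in remaining])
        p = np.exp(logits - logits.max())
        p /= p.sum()
        choice = rng.choice(len(remaining), p=p)
        pi.append(remaining.pop(choice))
    return np.array(pi)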
4 Sampling by Sequentially Matching Vertices
In this section we introduce a class of proposal distributions that can be effectively used in conjunction with the Metropolis-Hastings algorithm to obtain samples from a BMP distribution. Our approach is based on the observation that the sequential procedure behind the Plackett-Luce model can also be extended to generate matches between item sets. Instead of placing items into ranked positions, we can think of the Plackett-Luce generative process as sequentially matching ranks to the items in V, as illustrated in the top row of Figure 1. To generate the permutation π = {2, 3, 1}, the Plackett-Luce model first matches rank 1 with v_π(1) = v_2, then rank 2 with v_π(2) = v_3, and finally rank 3 with v_π(3) = v_1. Taking this one step further, we can replace ranks with a general item set U and repeat the same process. Unlike ranks, items in U do not have a natural order, so we use a reference permutation σ, which specifies the order in which items in U are matched. We refer to this procedure as sequential matching. The bottom row of Figure 1 illustrates this process.
Formally the sequential matching process proceeds as follows: given some reference permutation σ, we start with an empty assignment π_{1:0} = ∅. Then at each iteration t = 1, ..., N the corresponding item u_σ(t) gets matched with one of the items in V \ π_{1:t−1}, where V \ π_{1:t−1} = {v_{j_t}, ..., v_{j_N}} denotes the set of items not matched in π_{1:t−1}. Note that, similarly to the Plackett-Luce model, |V \ π_{1:t−1}| = N − t + 1, so at each iteration u_σ(t) will have N − t + 1 left-over items in V \ π_{1:t−1} to match with. We define the conditional probability of each such match to be p(v_j | u_σ(t), π_{1:t−1}), with Σ_{v_j ∈ V∖π_{1:t−1}} p(v_j | u_σ(t), π_{1:t−1}) = 1. After N iterations the permutation π_{1:N} = π is produced with probability:

Q(\pi\,|\,\sigma) = \prod_{t=1}^{N} p\bigl(v_{\pi(\sigma(t))} \,\big|\, u_{\sigma(t)}, \pi_{1:t-1}\bigr)   (6)
where v_π(σ(t)) is the match for u_σ(t) in π. The conditional match probabilities depend both on the current item u_σ(t) and on the partial assignment π_{1:t−1}. Introducing this dependency generalizes the Plackett-Luce model, which only takes into account that the items in π_{1:t−1} are already matched but not how these items are matched. This dependency becomes very important when the energy contains pairwise and/or higher order potentials, as it allows us to compute the change in energy for each new match, in turn allowing for close approximations to the target BMP distribution.
We can show that the distribution Q defined by the p's is a valid distribution over assignments:

Proposition 1. For any reference permutation σ and any choice of matching probabilities that satisfy Σ_{v_j ∈ V∖π_{1:t−1}} p(v_j | u_σ(t), π_{1:t−1}) = 1, the distribution given by Q(π|σ) = Π_{t=1}^N p(v_π(σ(t)) | u_σ(t), π_{1:t−1}) is a valid probability distribution over assignments. (The proof is in the supplementary material.)

The important consequence of this proposition is that it allows us to work with a very rich class of matching probabilities with arbitrary dependencies and still obtain a valid distribution over assignments with a simple way to generate exact samples from it. This opens many avenues for tailoring proposal distributions for MCMC applications to specific BMPs. In the next section we propose one such approach.
4.1 Proposal Distribution
Given the general matching probabilities, the goal is to define them so that the resulting proposal distribution Q matches the target distribution as closely as possible. One way of achieving this is through the partial energy E(π_{1:t}, θ) (see Section 2). The partial energy ignores all the items that are not matched in π_{1:t} and thus provides an estimate of the "current" energy at each iteration t. Using partial energies we can also find the change in energy when a given item is matched. Given that our goal is to explore low-energy (high-probability) modes, we define the matching probabilities as:

p(v_j \,|\, u_{\sigma(t)}, \pi_{1:t-1}) = \frac{\exp\bigl(-E(H(v_j, u_{\sigma(t)}, \pi_{1:t-1}), \theta)\bigr)}{Z_t(u_{\sigma(t)}, \pi_{1:t-1})}, \qquad Z_t(u_{\sigma(t)}, \pi_{1:t-1}) = \sum_{v_j \in V\setminus\pi_{1:t-1}} \exp\bigl(-E(H(v_j, u_{\sigma(t)}, \pi_{1:t-1}), \theta)\bigr)   (7)

where H(v_j, u_σ(t), π_{1:t−1}) is the resulting partial assignment after the match {u_σ(t), v_j} is added to π_{1:t−1}. The normalizing constant Z_t ensures that the probabilities sum to 1, which is the necessary condition for Proposition 1 to apply. It is useful to rewrite the matching probabilities as:

p(v_j \,|\, u_{\sigma(t)}, \pi_{1:t-1}) = \frac{\exp\bigl(-E(H(v_j, u_{\sigma(t)}, \pi_{1:t-1}), \theta) + E(\pi_{1:t-1}, \theta)\bigr)}{Z^*_t(u_{\sigma(t)}, \pi_{1:t-1})}, \qquad Z^*_t(u_{\sigma(t)}, \pi_{1:t-1}) = \sum_{v_j \in V\setminus\pi_{1:t-1}} \exp\bigl(-E(H(v_j, u_{\sigma(t)}, \pi_{1:t-1}), \theta) + E(\pi_{1:t-1}, \theta)\bigr).

Adding E(π_{1:t−1}, θ) to each item's energy does not change the probabilities, because this term cancels out during normalization (but it does change the partition function, denoted by Z^*_t here). However, in this form we see that p(v_j | u_σ(t), π_{1:t−1}) is directly related to the change in the partial energy from π_{1:t−1} to H(v_j, u_σ(t), π_{1:t−1}): the larger the change, the bigger the resulting probability. Thus, the matching choices will be made solely based on the changes in the partial energy.
Reorganizing the terms yields the proposal distribution:

Q(\pi\,|\,\sigma) = \frac{\exp(-E(\pi_{1:1},\theta) + E(\pi_{1:0},\theta))}{Z^*_1(u_{\sigma(1)}, \pi_{1:0})} \times \cdots \times \frac{\exp(-E(\pi_{1:N},\theta) + E(\pi_{1:N-1},\theta))}{Z^*_N(u_{\sigma(N)}, \pi_{1:N-1})} = \frac{\exp(-E(\pi, \theta))}{Z^*(\pi, \sigma)}.

Here Z^*(π, σ) is the normalization factor, which depends both on the reference permutation σ and on the generated assignment π. The resulting proposal distribution is essentially a renormalized version of the target distribution: the numerator remains the exponent of the energy, but the denominator is no longer a constant; rather, it is a function that depends on the generated assignment and the reference permutation. Note that the proposal distribution defined above can be used to generate samples for any target distribution with arbitrary energy consisting of single and/or higher order potentials. To the best of our knowledge, aside from the Gibbs sampler this is the only sampling procedure that can be applied to arbitrary BMP distributions.
4.2 Temperature and Chain Properties
The acceptance rate, a key property of any sampler, is typically controlled by a parameter which either shrinks or expands the proposal distribution. To achieve this effect with the sequential matching model we introduce an additional parameter T, which we refer to as the temperature: p(v_j | u_σ(t), π_{1:t−1}, T) ∝ exp(−E(H(v_j, u_σ(t), π_{1:t−1}), θ)/T). Decreasing T leads to sharp proposal distributions, typically highly skewed towards one specific assignment, while increasing T makes the proposal distribution approach the uniform distribution. By adjusting T we can control the range of the proposed moves, and therefore the acceptance rate.

To ensure that the SM sampler converges to the required distribution, we verify that it satisfies the three requisite properties: detailed balance, ergodicity, and aperiodicity [15]. The detailed balance condition is satisfied because every Metropolis-Hastings algorithm satisfies detailed balance. Ergodicity follows from the fact that the matching probabilities are always strictly greater than 0; therefore any π is reachable from any σ in one proposal cycle. Finally, aperiodicity follows from the fact that the chain allows self-transitions.
4.3 Reference Permutation

Fixing the reference permutation σ yields a state independent sampler. Empirically we found that setting σ to the MAP permutation gives good performance for WBMP problems. However, for the general energy-based distributions considered here, finding the MAP state can be very expensive and in many cases intractable. Moreover, even if the MAP can be found efficiently, there is still no guarantee that using it as the reference permutation will lead to a good sampler. To avoid these problems we use a state dependent sampler where the reference permutation σ is updated every time a sample gets accepted. In the matching example (bottom row of Figure 1), if the new match at t = 3 is accepted then σ would be updated to {3, 1, 2}. Empirically we found the state dependent sampler to be more stable, with consistent performance across different random initializations of the reference permutation. Algorithm 1 summarizes the Metropolis-Hastings procedure for the state dependent sequential matching sampler.

Algorithm 1 Sequential Matching (SM)
Input: σ, M, T
for m = 1 to M do
    Initialize π_{1:0} = ∅
    for t = 1 to N do   {generate sample from Q(π|σ)}
        Find a match v_j for u_σ(t) using p(v_j | u_σ(t), π_{1:t−1}, T)
        Add {u_σ(t), v_j} to π_{1:t−1} to get π_{1:t}
    end for
    Calculate the forward probability: Q(π|σ) = Π_{t=1}^N p(v_π(σ(t)) | u_σ(t), π_{1:t−1}, T)
    Calculate the backward probability: Q(σ|π) = Π_{t=1}^N p(v_σ(π(t)) | u_π(t), σ_{1:t−1}, T)
    if Uniform(0,1) < [exp(−E(π,θ)) Q(σ|π)] / [exp(−E(σ,θ)) Q(π|σ)] then
        σ ← π
    end if
end for
Return: σ
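A minimal sketch of Algorithm 1 follows, assuming energy is a callable that returns E(π_{1:t}, θ) for a partial assignment encoded as an array with -1 marking unmatched items; the helper names, encoding, and log-space arithmetic are illustrative choices.

import numpy as np

def seq_match(ref, energy, temp, rng, force=None):
    # Generate an assignment with reference permutation `ref`, returning
    # (matches, log_q) with log_q = log Q(matches | ref). If `force` is
    # given, compute the log-probability of producing exactly `force`
    # instead of sampling (needed for the backward probability).
    N = len(ref)
    matches = np.full(N, -1)
    unmatched = list(range(N))
    log_q = 0.0
    for t in range(N):
        u = ref[t]                      # items are matched in the order given by ref
        cand = np.empty(len(unmatched))
        for a, v in enumerate(unmatched):
            matches[u] = v              # candidate partial assignment H(v, u, pi_{1:t-1})
            cand[a] = energy(matches)
        matches[u] = -1
        logits = -cand / temp
        logits -= logits.max()
        p = np.exp(logits)
        p /= p.sum()
        idx = rng.choice(len(unmatched), p=p) if force is None else unmatched.index(force[u])
        log_q += np.log(p[idx])
        matches[u] = unmatched.pop(idx)
    return matches, log_q

def sm_sampler(sigma, energy, n_steps, temp, rng):
    # State dependent sequential matching sampler (Algorithm 1).
    chain = []
    for _ in range(n_steps):
        pi, log_fwd = seq_match(sigma, energy, temp, rng)            # Q(pi | sigma)
        _, log_bwd = seq_match(pi, energy, temp, rng, force=sigma)   # Q(sigma | pi)
        log_accept = (-energy(pi) + log_bwd) - (-energy(sigma) + log_fwd)
        if np.log(rng.random()) < log_accept:
            sigma = pi                  # the accepted sample becomes the new reference
        chain.append(sigma.copy())
    return chain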
5 Experiments
To test the sequential matching sampling approach we conducted extensive experiments. We considered document ranking and image matching, two popular applications of BMP; and for the sake of
comparison we concentrated on WBMP, as most of the methods cannot be applied to general BMP problems. When comparing the samplers we concentrated on evaluating how well the Monte Carlo estimates of probabilities produced by the samplers approximate the true distribution P. When target probabilities are known, this method of evaluation provides a good estimate of performance, since the ultimate goal of any sampler is to approximate P as closely as possible.

Table 1: Average Hellinger distances for the learning to rank (left half) and image matching (right half) problems. Statistically significant results are underlined. Note that Hellinger distances for N = 8 are not directly comparable to those for N = 25, 50 since approximate normalization is used for N > 8. For N = 50 we were unable to get a single sample from the RP sampler for any c in the allocated time limit (over 5 minutes).

Learning to Rank       c = 20    c = 40    c = 60    c = 80    c = 100
  N = 8:   GB          0.7948    0.6211    0.4635    0.4218    0.3737
           CF          0.9012    0.8987    0.8887    0.8714    0.8748
           RP          0.7945    0.6209    0.4629    0.4986    0.3734
           SM          0.7902    0.6188    0.4636    0.4474    0.3725
  N = 25:  GB          0.9533    0.9728    0.9646    0.9449    0.9486
           CF          0.9767    0.9990    0.9937    0.9953    0.9781
           RP          0.9533    0.9728    0.9694    0.9462    0.9673
           SM          0.1970    0.1937    0.2899    0.4166    0.3858
  N = 50:  GB          0.9983    0.9991    0.9988    0.9974    0.9985
           CF          0.9841    0.9995    0.9993    0.9906    0.9305
           SM          0.1617    0.2335    0.3462    0.4931    0.4895

Image Matching         c = 0.2   c = 0.4   c = 0.6   c = 0.8   c = 1
  N = 8:   GB          0.9108    0.8868    0.8320    0.7616    0.6533
           CF          0.9112    0.8882    0.8336    0.7672    0.6623
           RP          0.9110    0.8870    0.8312    0.7623    0.6548
           SM          0.9109    0.8866    0.8307    0.7621    0.6557
  N = 25:  GB          0.7246    0.8669    0.9902    0.9960    0.9976
           CF          0.7243    0.8675    0.9904    0.9950    0.9807
           RP          0.7279    0.9788    0.9896    0.9988    0.9969
           SM          0.7234    0.8471    0.8472    0.6350    0.5576
  N = 50:  GB          0.6949    0.9646    1.0000    1.0000    1.0000
           CF          0.6960    0.9635    1.0000    1.0000    0.9992
           SM          0.6941    0.9243    0.7016    0.3550    0.1677
For all experiments the Hellinger distance was used to compare the true distributions with the approximations produced by the samplers. We chose this metric because it is symmetric and bounded; furthermore, it avoids the log(0) problems that arise in cross entropy measures. For any two distributions P and Q the Hellinger distance is given by

D = \Bigl(1 - \sum_{\pi} \bigl(P(\pi)\,Q(\pi)\bigr)^{1/2}\Bigr)^{1/2}.

Note that 0 ≤ D ≤ 1, where 0 indicates that P = Q. Computing D exactly quickly becomes intractable as the number of items grows. To overcome this problem we note that if a given permutation π is not generated by any of the samplers, then the term P(π)Q(π) is 0 and does not affect the resulting estimate of D for any sampler. Hence we can locally approximate D, up to a constant shared by all samplers, by changing Equation 1 to

P(\pi\,|\,\theta) \approx \frac{\exp(-E(\pi,\theta))}{\sum_{\pi' \in \tilde{\Omega}} \exp(-E(\pi',\theta))},

where Ω̃ is the union of all distinct permutations produced by the samplers. The Hellinger distance is then estimated with respect to Ω̃.
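A sketch of the resulting estimator: renormalize the target over the union support Ω̃ and compare it with one sampler's empirical frequencies (array and function names are illustrative):

import numpy as np
from collections import Counter

def hellinger_estimate(sampled_perms, support, energy):
    # sampled_perms: permutations (as tuples) drawn by one sampler;
    # support: the union of distinct permutations produced by all
    # compared samplers; energy maps a permutation to E(pi, theta).
    counts = Counter(sampled_perms)
    log_p = np.array([-energy(pi) for pi in support])
    p = np.exp(log_p - log_p.max())
    p /= p.sum()                        # renormalized target over the support
    q = np.array([counts.get(pi, 0) for pi in support], dtype=float)
    q /= q.sum()                        # empirical sampler distribution
    return np.sqrt(1.0 - np.sqrt(p * q).sum())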
For all experiments we ran the samplers on small (N = 8), medium (N = 25) and large (N = 50) scale problems. The sampling chains for each method were run in parallel using 4 cores; the use of multiprocessor boards such as GPUs allows our method to scale to large problems. We compare the SM approach with the Gibbs (GB), chain flipping (CF) and recursive partitioning (RP) samplers. To run RP we used the code available from the author's webpage. These methods cover all of the primary leading approaches in WBMP and matrix permanent research.

Since any valid sampler will eventually produce samples from the target distribution, we tested the methods with short chain lengths. This regime also simulates real applications of the methods where, due to computational time limits, the user is typically unable to run long chains. Note that this is especially relevant if the distributions are being sampled as an inner loop during parameter optimization. Furthermore, to make the comparisons fair, we used the block GB sampler with a block size of 7 (the largest computationally feasible size) as the reference point. We used 2N swaps for each GB chain, setting the number of iterations for the other methods to match the total run time for GB (for all experiments the difference in running times between GB and SM did not exceed 10%). The run times of the CF and RP methods are difficult to control, as they are non-deterministic. To deal with this we set an upper bound on the running time (consistent with the other methods), after which CF and RP were terminated. Finally, the temperature for SM was chosen in the [0.1, 1] interval to keep the acceptance rate approximately between 20% and 60%.
5.1 Learning to Rank
For a learning to rank problem we used the Yahoo! Learning To Rank dataset [4]. For each query
the distribution over assignments was parametrized by the energy given in Equation 2. Here θ_i is the output of the neural network scoring function trained on query-document features. After
pretraining the network on the full dataset we randomly selected 50 queries with N = 8, 25, 50
documents and used GB, CF, RP and SM methods to generate 1000 samples for each query. To gain
insight into sampling accuracy we experimented with different distribution shapes by introducing
an additional scaling constant c so that P(π|θ, c) ∝ exp(−c · E(π, θ)). In this form c controls the "peakiness" of the distribution with large values resulting in highly peaked distributions; we used c ∈ {20, 40, 60, 80, 100}.
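To see the effect of c concretely, the sketch below (toy random energies, not the experimental setup above) enumerates all permutations for a small N and reports how much probability mass the MAP assignment receives as c grows.

```python
import itertools, math, random

N = 5
random.seed(0)
# Toy unary scores theta[i][j]: affinity of item i for position j.
theta = [[random.random() for _ in range(N)] for _ in range(N)]

def energy(pi):
    # Lower energy = better assignment; here E = -(sum of matched scores).
    return -sum(theta[i][pi[i]] for i in range(N))

perms = list(itertools.permutations(range(N)))
for c in [1, 5, 20, 100]:
    w = [math.exp(-c * energy(pi)) for pi in perms]
    z = sum(w)
    print(f"c={c:>3}: MAP mass = {max(w) / z:.3f}")  # mass concentrates as c grows
```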
The left half of Table 1 shows Hellinger distances for N = 8, 25, 50, averaged across the 50 queries.2
From the table it is seen that all the samplers perform equally well when the number of items is small
(N = 8). However, as the number of items increases SM significantly outperforms all other samplers. Throughout experiments we found that the CF and RP samplers often reached the allocated
time limit and had to be forced to terminate early. For N = 50 we were unable to get a single
sample from the RP sampler after running it for over 5 minutes. This is likely due to the fact that
at each matching step t = 1, ..., N the RP sampler has a non-zero probability of failing (rejecting).
Consequently the total rejection probability increases linearly with the number of items N . Even
for N = 25 we found the RP sampler to reject over 95% of the time. This further suggests that approaches with non-deterministic run times are not suitable for this problem because their worst-case
performance can be extremely slow. Overall, the results indicate that SM can produce higher quality
samples more rapidly, a crucial property for learning large-scale models.
5.2 Image Matching
For an image matching task we followed the framework of Petterson et al. [17]. Here, we used the
Giraffe dataset [21] which is a video sequence of a walking giraffe. From this data we randomly selected 50 pairs of images that were at least 20 frames apart. Using the available set of 61 hand labeled
points we then randomly selected three sets of correspondence points for each image pair, containing
8, 25 and 50 points respectively, and extracted SIFT feature descriptors at each point. The target distribution over matchings was parametrized by the energy given by Equation 3 where the φ's are the SIFT feature descriptors. We also experimented with different scale settings: c ∈ {0.2, 0.4, 0.6, 0.8, 1}.
Figure 2 shows an example pair of images with 25 labeled points and the inferred MAP assignment.
Figure 2: Example image pair with N = 25. Green lines show the inferred MAP assignment.
The results for N = 8, 25, 50 are shown in the right half of Table 1. We see that when the distributions are relatively flat (c < 0.6) all samplers have comparable performance. However, as the distributions become sharper with several well defined modes (c ≥ 0.6), the SM sampler significantly outperforms all other samplers. As mentioned above, when the distribution has well defined modes the path from one mode to the other using only local swaps is likely to go through low probability modes. This is
the likely cause of the poor performance of the GB and CF samplers as both samplers propose new
assignments through local moves. As in the learning to rank experiments, we found the rejection
rate for the RP sampler to increase significantly for N ≥ 25. We were unable to obtain any samples
in the allocated time (over 5 mins) from the RP sampler for N = 50. Overall, the results further
show that the SM method is able to generate higher quality samples faster than the other methods.
6 Conclusion
In this paper we introduced a new sampling approach for bipartite matching problems based on a
generalization of the Plackett-Luce model. In this approach the matching probabilities at each stage
are conditioned on the partial assignment made to that point. This global dependency allows us
to define a rich class of proposal distributions that accurately approximate the target distribution.
Empirically we found that our method is able to generate good quality samples faster and is less
prone to getting stuck in local modes. Future work involves applying the sampler during inference
while learning BMP models. We also plan to investigate the relationship between the proposal
distribution produced by sequential matching and the target one.
² Trace and Hellinger distance plots (for both experiments) are in the supplementary material.
References
[1] A. Bouchard-Cote and M. I. Jordan. Variational inference over combinatorial spaces. In NIPS,
2010.
[2] C. Cadena, D. Galvez-Lopez, F. Ramos, J. D. Tardos, and J. Neira. Robust place recognition
with stereo cameras. In IROS, 2010.
[3] T. S. Caetano, L. Cheng, Q. V. Le, and A. J. Smola. Learning graph matching. In ICML, 2009.
[4] O. Chapelle, Y. Chang, and T.-Y. Liu. The Yahoo! Learning to Rank Challenge. 2010.
[5] F. Dellaert, S. M. Seitz, C. E. Thorpe, and S. Thrun. EM, MCMC, and chain flipping for
structure from motion with unknown correspondence. Machine Learning, 50, 2003.
[6] J.-P. Doignon, A. Pekec, and M. Regenwetter. The repeated insertion model for rankings:
Missing link between two subset choice models. Psychometrika, 69, 2004.
[7] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the web. In
WWW, 2001.
[8] B. Huang and T. Jebara. Loopy belief propagation for bipartite maximum weight b-matching.
In AISTATS, 2007.
[9] J. Huang, C. Guestrin, and L. Guibas. Fourier theoretic probabilistic inference over permutations. Machine Learning Research, 10, 2009.
[10] M. Huber and J. Law. Fast approximation of the permanent for very dense problems. In SODA,
2008.
[11] M. Jerrum, A. Sinclair, and E. Vigoda. A polynomial-time approximation algorithm for the
permanent of a matrix with non-negative entries. 2004.
[12] Q. V. Le and A. Smola. Direct optimization of ranking measures. In arxiv: 0704.3359, 2007.
[13] T. Lu and C. Boutilier. Learning Mallows models with pairwise preferences. In ICML, 2011.
[14] R. D. Luce. Individual choice behavior: A theoretical analysis. Wiley, 1959.
[15] R. M. Neal. Probabilistic inference using Markov Chain Monte Carlo methods. Technical
report, University of Toronto, 1993.
[16] C. H. Papadimitriou and K. Steiglitz. Combinatorial optimization: Algorithms and complexity.
Prentice-Hall, 1982.
[17] J. Petterson, T. S. Caetano, J. J. McAuley, and J. Yu. Exponential family graph matching and
ranking. In NIPS, 2009.
[18] R. Plackett. The analysis of permutations. Applied Statistics, 24, 1975.
[19] T. Qin, T.-Y. Liu, X.-D. Zhang, D.-S. Wang, and H. Li. Global ranking using continuous
conditional random fields. In NIPS, 2008.
[20] F. Ramos, D. Fox, and H. Durrant-Whyte. CRF-Matching: Conditional random fields for
feature-based scan matching. In Robotics: Science and Systems, 2007.
[21] D. A. Ross, D. Tarlow, and R. S. Zemel. Learning articulated structure and motion. International Journal on Computer Vision, 88, 2010.
[22] R. Salakhutdinov. Learning deep Boltzmann machines using adaptive MCMC. In ICML, 2010.
[23] M. Taylor, J. Guiver, S. Robertson, and T. Minka. Softrank: Optimizing non-smooth rank
metrics. In WSDM, 2008.
[24] W. R. Taylor. Protein structure comparison using bipartite graph matching and its application
to protein structure classification. In Molecular Cell Proteomics, 2002.
[25] L. G. Valiant. The complexity of computing the permanent. Theoretical Computer Science, 8,
1979.
[26] M. N. Volkovs and R. S. Zemel. Boltzrank: Learning to maximize expected ranking gain. In
ICML, 2009.
[27] Y. Wang, F. Makedon, and J. Ford. A bipartite graph matching framework for finding correspondences between structural elements in two proteins. In IEBMS, 2004.
3,857 | 4,492 | Learning Halfspaces with the Zero-One Loss:
Time-Accuracy Tradeoffs
Aharon Birnbaum and Shai Shalev-Shwartz
School of Computer Science and Engineering
The Hebrew University
Jerusalem, Israel
Abstract
Given α, ε, we study the time complexity required to improperly learn a halfspace with misclassification error rate of at most (1 + α) L*_γ + ε, where L*_γ is the optimal γ-margin error rate. For α = 1/γ, polynomial time and sample complexity is achievable using the hinge-loss. For α = 0, Shalev-Shwartz et al. [2011] showed that poly(1/γ) time is impossible, while learning is possible in time exp(Õ(1/γ)). An immediate question, which this paper tackles, is what is achievable if α ∈ (0, 1/γ). We derive positive results interpolating between the polynomial time for α = 1/γ and the exponential time for α = 0. In particular, we show that there are cases in which α = o(1/γ) but the problem is still solvable in polynomial time. Our results naturally extend to the adversarial online learning model and to the PAC learning with malicious noise model.
1 Introduction
Some of the most influential machine learning tools are based on the hypothesis class of halfspaces
with margin. Examples include the Perceptron [Rosenblatt, 1958], Support Vector Machines [Vapnik, 1998], and AdaBoost [Freund and Schapire, 1997]. In this paper we study the computational
complexity of learning halfspaces with margin.
A halfspace is a mapping h(x) = sign(⟨w, x⟩), where w, x ∈ X are taken from the unit ball of an RKHS (e.g. R^n), and ⟨w, x⟩ is their inner-product. Relying on the kernel trick, our sole assumption on X is that we are able to calculate efficiently the inner-product between any two instances (see for example Schölkopf and Smola [2002], Cristianini and Shawe-Taylor [2004]). Given an example (x, y) ∈ X × {±1} and a vector w, we say that w errs on (x, y) if y⟨w, x⟩ ≤ 0 and we say that w makes a γ-margin error on (x, y) if y⟨w, x⟩ ≤ γ.
The error rate of a predictor h : X → {±1} is defined as L01(h) = P[h(x) ≠ y], where the probability is over some unknown distribution over X × {±1}. The γ-margin error rate of a predictor x ↦ ⟨w, x⟩ is defined as L_γ(w) = P[y⟨w, x⟩ ≤ γ]. A learning algorithm A receives an i.i.d. training set S = (x₁, y₁), . . . , (x_m, y_m) and its goal is to return a predictor, A(S), whose error rate is small. We study the runtime required to learn a predictor such that with high probability over the choice of S, the error rate of the learnt predictor satisfies
L01(A(S)) ≤ (1 + α) L*_γ + ε  where  L*_γ = min_{w:‖w‖=1} L_γ(w).    (1)
There are three parameters of interest: the margin parameter, γ, the multiplicative approximation factor parameter, α, and the additive error parameter ε.
From the statistical perspective (i.e., if we allow exponential runtime), Equation (1) is achievable with α = 0 by letting A be the algorithm which minimizes the number of margin errors over the training set subject to a norm constraint on w. The sample complexity of A is m = Õ(1/(γ²ε²)). See for example Cristianini and Shawe-Taylor [2004].
If the data is separable with margin (that is, L*_γ = 0), then the aforementioned A can be implemented in time poly(1/γ, 1/ε). However, the problem is much harder in the agnostic case, namely, when L*_γ > 0 and the distribution over examples can be arbitrary.
Ben-David and Simon [2000] showed that no proper learning algorithm can satisfy Equation (1) with α = 0 while running in time polynomial in both 1/γ and 1/ε. By "proper" we mean an algorithm which returns a halfspace predictor. Shalev-Shwartz et al. [2011] extended this result to improper learning, that is, when A(S) should satisfy Equation (1) but is not required to be a halfspace. They also derived an algorithm that satisfies Equation (1) and runs in time exp(C · (1/γ) · log(1/(γε))), where C is a constant.
Most algorithms that are being used in practice minimize a convex surrogate loss. That is, instead of minimizing the number of mistakes on the training set, the algorithms minimize L̂(w) = (1/m) Σ_{i=1}^m ℓ(y_i⟨w, x_i⟩), where ℓ : R → R is a convex function that upper bounds the 0−1 loss. For example, the Support Vector Machine (SVM) algorithm relies on the hinge loss. The advantage of surrogate convex losses is that minimizing them can be performed in time poly(1/γ, 1/ε). It is easy to verify that minimizing L̂(w) with respect to the hinge loss yields a predictor that satisfies Equation (1) with α = 1/γ. Furthermore, Long and Servedio [2011], Ben-David et al. [2012] have shown that any convex surrogate loss cannot guarantee Equation (1) if α < (1/2)(1/γ − 1).
Despite the centrality of this problem, not much is known on the runtime required to guarantee Equation (1) with other values of α. In particular, a natural question is how the runtime changes when enlarging α from 0 to 1/γ. Does it change gradually or perhaps there is a phase transition?
Our main contribution is an upper bound on the required runtime as a function of α. For any α between¹ 5 and 1/γ, let ν = 1/(αγ). We show that the runtime required to guarantee Equation (1) is at most exp(C · ν · min{ν, log(1/γ)}), where C is a universal constant (we ignore additional factors which are polynomial in 1/ε, 1/γ; see a precise statement with the exact constants in Theorem 1). That is, when we enlarge α, the runtime decreases gradually from being exponential to being polynomial. Furthermore, we show that the algorithm which yields the aforementioned bound is a vanilla SVM with a specific kernel. We also show how one can design specific kernels that will fit well certain values of α while minimizing our upper bound on the sample and time complexity.
In Section 4 we extend our results to the more challenging learning settings of adversarial online
learning and PAC learning with malicious noise. For both cases, we obtain similar upper bounds
on the runtime as a function of α. The technique we use in the malicious noise case may be of
independent interest.
An interesting special case is when α = 1/(γ√(log(1/γ))). In this case, ν² = log(1/γ) and hence the runtime is still polynomial in 1/γ. This recovers a recent result of Long and Servedio [2011]. Their
technique is based on a smooth boosting algorithm applied on top of a weak learner which constructs
random halfspaces and takes their majority vote. Furthermore, Long and Servedio emphasize that
their algorithm is not based on convex optimization. They show that no convex surrogate can obtain
α = o(1/γ). As mentioned before, our technique is rather different as we do rely on the hinge
loss as a surrogate convex loss. There is no contradiction to Long and Servedio since we apply the
convex loss in the feature space induced by our kernel function. The negative result of Long and
Servedio holds only if the convex surrogate is applied on the original space.
¹ We did not analyze the case α < 5 because the runtime is exponential in 1/γ even when α = 5. Note, however, that our bound for α = 5 is slightly better than the bound of Shalev-Shwartz et al. [2011] for α = 0 because our bound does not involve the parameter ε in the exponent while their bound depends on exp((1/γ) log(1/(γε))).
1.1 Additional related work
The problem of learning kernel-based halfspaces has been extensively studied before in the framework of SVM [Vapnik, 1998, Cristianini and Shawe-Taylor, 2004, Schölkopf and Smola, 2002] and the Perceptron [Freund and Schapire, 1999]. Most algorithms replace the 0-1 error function with a convex surrogate. As mentioned previously, Ben-David et al. [2012] have shown that this approach leads to an approximation factor of at least (1/2)(1/γ − 1).
There have been several works attempting to obtain an efficient algorithm for the case α = 0 under certain distributional assumptions. For example, Kalai et al. [2005], Blais et al. [2008] have shown that if the marginal data distribution over X is a product distribution, then it is possible to satisfy Equation (1) with α = γ = 0, in time poly(n^{1/ε⁴}). Klivans et al. [2009] derived similar results for the case of malicious noise. Another distributional assumption is on the conditional probability of the label given the instance. For example, Kalai and Sastry [2009] solve the problem in polynomial time if there exists a vector w and a monotonically non-increasing function φ such that P(Y = 1|X = x) = φ(⟨w, x⟩).
Zhang [2004], Bartlett et al. [2006] also studied the relationship between surrogate convex loss
functions and the 0-1 loss function. They introduce the notion of well calibrated loss functions,
meaning that the excess risk of a predictor h (over the Bayes optimal) with respect to the 0-1 loss
can be bounded using the excess risk of the predictor with respect to the surrogate loss. It follows that
if the latter is close to zero than the former is also close to zero. However, as Ben-David et al. [2012]
show in detail, without making additional distributional assumptions the fact that a loss function is
well calibrated does not yield finite-sample or finite-time bounds.
In terms of techniques, our Theorem 1 can be seen as a generalization of the positive result given
in Shalev-Shwartz et al. [2011]. While Shalev-Shwartz et al. only studied the case α = 0, we are interested in understanding the whole curve of runtime as a function of α. Similar to the analysis of
Shalev-Shwartz et al., we approximate the sigmoidal and erf transfer functions using polynomials.
However, we need to break symmetry in the definition of the exact transfer function to approximate.
The main technical observation is that the Lipschitz constant of the transfer functions we approximate does not depend on ν, and is roughly 1/γ no matter what ν is. Instead, the change of the transfer function when ν is increasing is in higher order derivatives.
To the best of our knowledge, the only middle point on the curve that has been studied before is the case α = 1/(γ√(log(1/γ))), which was analyzed in Long and Servedio [2011]. Our work shows an upper bound on the entire curve. Besides that, we also provide a recipe for constructing better kernels for specific values of α.
2 Main Results
Our main result is an upper bound on the time and sample complexity for all values of α between 5 and 1/γ. The bounds we derive hold for a norm-constraint form of SVM with a specific kernel, which we recall now. Given a training set S = (x₁, y₁), . . . , (x_m, y_m), and a feature mapping ψ : X → X′, where X′ is the unit ball of some Hilbert space, consider the following learning rule:
argmin_{v:‖v‖²≤B} Σ_{i=1}^m max{0, 1 − y_i⟨v, ψ(x_i)⟩}.    (2)
Using the well known kernel-trick, if K(x, x′) implements the inner product ⟨ψ(x), ψ(x′)⟩, and G is an m × m matrix with G_{i,j} = K(x_i, x_j), then we can write a solution of Equation (2) as v = Σ_i a_i ψ(x_i) where the vector a ∈ R^m is a solution of
argmin_{a:aᵀGa≤B} Σ_{i=1}^m max{0, 1 − y_i(Ga)_i}.    (3)
The above is a convex optimization problem in m variables and can be solved in time poly(m). Given a solution a ∈ R^m, we define a classifier h_a : X → {±1} to be
h_a(x) = sign(Σ_{i=1}^m a_i K(x_i, x)).    (4)
The upper bounds we derive hold for the above kernel-based SVM with the kernel function
K(x, x′) = 1 / (1 − (1/2)⟨x, x′⟩).    (5)
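A minimal sketch of the learning rule in Equations (3)-(5) is given below, assuming the instances are rows of a numpy array lying in the unit ball. Projected subgradient descent with rescaling onto {a : aᵀGa ≤ B} is our own solver choice; the text only requires that the convex problem be solved, e.g. in time poly(m).

```python
import numpy as np

def kernel(X1, X2):
    # K(x, x') = 1 / (1 - 0.5 <x, x'>) as in Eq. (5); rows must be in the unit ball.
    return 1.0 / (1.0 - 0.5 * X1 @ X2.T)

def train(X, y, B, steps=2000, lr=0.01):
    """Solve Eq. (3): min_a sum_i max(0, 1 - y_i (G a)_i)  s.t.  a^T G a <= B."""
    m = X.shape[0]
    G = kernel(X, X)
    a = np.zeros(m)
    for _ in range(steps):
        margins = y * (G @ a)
        active = margins < 1.0           # examples with positive hinge loss
        grad = -G @ (y * active)         # subgradient of the objective w.r.t. a
        a -= lr * grad / m
        norm2 = a @ G @ a                # project back onto {a : a^T G a <= B}
        if norm2 > B:
            a *= np.sqrt(B / norm2)
    return a

def predict(a, X_train, X_test):
    # h_a(x) = sign(sum_i a_i K(x_i, x)), Eq. (4).
    return np.sign(kernel(X_test, X_train) @ a)
```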
We are now ready to state our main theorem.
Theorem 1 For any γ ∈ (0, 1/2) and α ≥ 5, let ν = 1/(αγ) and let
B = min{ 4ν²(96ν² + e^{18ν log(8να²)+5}), (1/γ²)(0.06 e^{4ν²} + 3) } = poly(1/γ) · e^{min{18ν log(8να²), 4ν²}}.
Fix ε, δ ∈ (0, 1/2) and let m be a training set size that satisfies
m ≥ (16/ε²) max{2B, (1 + α)² log(2/δ)}.
Let A be the algorithm which solves Equation (3) with the kernel function given in Equation (5), and returns the predictor defined in Equation (4). Then, for any distribution, with probability of at least 1 − δ, the algorithm A satisfies Equation (1).
The proof of the theorem is given in the next section. As a direct corollary we obtain that there is an efficient algorithm that achieves an approximation factor of α = o(1/γ):
Corollary 2 For any γ, ε, δ ∈ (0, 1), let α = (1/γ)/√(log(1/γ)) and let B = 0.06/γ⁶ + 3/γ². Then, with m, A being as defined in Theorem 1, the algorithm A satisfies Equation (1).
As another corollary of Theorem 1 we obtain that for any constant c ∈ (0, 1), it is possible to satisfy Equation (1) with α = c/γ in polynomial time. However, the dependence of the runtime on the constant c is e^{4/c²}. For example, for c = 1/2 we obtain the multiplicative factor e¹⁶ ≈ 8,800,000. Our next contribution is to show that a more careful design of the kernel function can yield better bounds.
Theorem 3 For any α, γ, let p be a polynomial of the form p(z) = Σ_{j=1}^d β_j z^{2j−1} (namely, p is odd) that satisfies
max_{z∈[−1,1]} |p(z)| ≤ α  and  min_{z:|z|≥γ} |p(z)| ≥ 1.
Let m be a training set size that satisfies
m ≥ (16/ε²) max{‖β‖₁², 2 log(4/δ), (1 + α)² log(2/δ)}.
Let A be the algorithm which solves Equation (3) with the following kernel function
K(x, x′) = Σ_{j=1}^d |β_j| (⟨x, x′⟩)^{2j−1},
and returns the predictor defined in Equation (4). Then, for any distribution, with probability of at least 1 − δ, the algorithm A satisfies Equation (1).
The above theorem provides us with a recipe for constructing good kernel functions: Given γ and α, find a vector β with minimal ℓ₁ norm such that the polynomial p(z) = Σ_{j=1}^d β_j z^{2j−1} satisfies the conditions given in Theorem 3. For a fixed degree d, this can be written as the following optimization problem:
min_{β∈R^d} ‖β‖₁  s.t.  ∀z ∈ [0, 1], p(z) ≤ α  and  ∀z ∈ [γ, 1], p(z) ≥ 1.    (6)
Note that for any z, the expression p(z) is a linear function of β. Therefore, the above problem is a linear program with an infinite number of constraints. Nevertheless, it can be solved efficiently using the Ellipsoid algorithm. Indeed, for any β, we can find the extreme points of the polynomial it defines, and then determine whether β satisfies all the constraints or, if it doesn't, we can find a violated constraint.
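In practice one can also discretize [0, 1] and [γ, 1] on a fine grid, which turns Equation (6) into an ordinary finite linear program; the sketch below (grid resolution, degree d and solver are our own choices) uses the standard ℓ₁ trick of auxiliary variables t ≥ |β|.

```python
import numpy as np
from scipy.optimize import linprog

def design_polynomial(gamma, alpha, d=4, grid=200):
    """Solve a discretized version of Eq. (6).

    Variables are (beta, t) in R^{2d}; we minimize sum(t) subject to
    -t <= beta <= t, |p(z)| <= alpha on [0,1] and p(z) >= 1 on [gamma,1],
    where p(z) = sum_j beta_j z^(2j-1) is odd.
    """
    powers = np.arange(1, 2 * d, 2)              # exponents 1, 3, ..., 2d-1
    z_all = np.linspace(0.0, 1.0, grid)
    z_hi = np.linspace(gamma, 1.0, grid)
    V_all = z_all[:, None] ** powers[None, :]    # rows (z, z^3, ..., z^(2d-1))
    V_hi = z_hi[:, None] ** powers[None, :]

    Zb = np.zeros((grid, d))
    I = np.eye(d)
    A_ub = np.vstack([
        np.hstack([ V_all, Zb]),   #  p(z) <= alpha on [0, 1]
        np.hstack([-V_all, Zb]),   # -p(z) <= alpha on [0, 1]
        np.hstack([-V_hi,  Zb]),   #  p(z) >= 1 on [gamma, 1]
        np.hstack([ I, -I]),       #  beta <= t
        np.hstack([-I, -I]),       # -beta <= t
    ])
    b_ub = np.concatenate([
        np.full(grid, alpha), np.full(grid, alpha), np.full(grid, -1.0),
        np.zeros(d), np.zeros(d),
    ])
    c = np.concatenate([np.zeros(d), np.ones(d)])  # minimize ||beta||_1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * d))
    return res.x[:d] if res.success else None      # None if infeasible
```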
To demonstrate how Theorem 3 can yield a better guarantee (in terms of the constants), we solved Equation (6) for the simple case of d = 2. For this simple case, we can provide an analytic solution to Equation (6), and based on this solution we obtain the following lemma whose proof is provided in the appendix.
Lemma 4 Given γ < 2/3, consider the polynomial p(z) = β₁z + β₂z³, where
β₁ = 1/γ + √(1 + γ)  and  β₂ = −1/(γ(1 + γ)).
Then, p satisfies the conditions of Theorem 3 with
α = 2/(3√3 γ) + 2/√3 ≈ 0.385 · (1/γ) + 1.155.
Furthermore, ‖β‖₁ ≤ 2/γ + 1.
It is interesting to compare the guarantee given in the above lemma to the guarantee of using the vanilla hinge-loss. For both cases the sample complexity is of order 1/(γ²ε²). For the vanilla hinge-loss we obtain the approximation factor 1/γ while for the kernel given in Lemma 4 we obtain an approximation factor of α ≈ 0.385 · (1/γ) + 1.155. Recall that Ben-David et al. [2012] have shown that without utilizing kernels, no convex surrogate loss can guarantee an approximation factor smaller than α < (1/2)(1/γ − 1). The above discussion shows that applying the hinge-loss with a kernel function can break this barrier without a significant increase in runtime² or sample complexity.
3 Proofs
Given a scalar loss function ℓ : R → R, and a vector w, we denote by L(w) = E_{(x,y)∼D}[ℓ(y⟨w, x⟩)] the expected loss value of the predictions of w with respect to a distribution D over X × {±1}. Given a training set S = (x₁, y₁), . . . , (x_m, y_m), we denote by L̂(w) = (1/m) Σ_{i=1}^m ℓ(y_i⟨w, x_i⟩) the empirical loss of w. We slightly overload our notation and also use L(w) to denote E_{(x,y)∼D}[ℓ(y⟨w, ψ(x)⟩)], when w is an element of an RKHS corresponding to the mapping ψ. We define L̂(w) analogously.
We will make extensive use of the following loss functions: the zero-one loss, ℓ01(z) = 1[z ≤ 0], the γ-zero-one loss, ℓ_γ(z) = 1[z ≤ γ], the hinge-loss, ℓ_h(z) = [1 − z]₊ = max{0, 1 − z}, and the ramp-loss, ℓ_ramp(z) = min{1, ℓ_h(z)}. We will use L01(w), L_γ(w), L_h(w), and L_ramp(w) to denote the expectations with respect to the different loss functions. Similarly L̂01(w), L̂_γ(w), L̂_h(w), and L̂_ramp(w) are the empirical losses of w with respect to the different loss functions.
Recall that we output a vector v that solves Equation (3). This vector is in the RKHS corresponding to the kernel given in Equation (5). Let B_x = max_{x∈X} K(x, x) ≤ 2. Since the ramp-loss upper bounds the zero-one loss we have that L01(v) ≤ L_ramp(v). The advantage of using the ramp loss is that it is both a Lipschitz function and it is bounded by 1. Hence, standard Rademacher generalization analysis (e.g. Bartlett and Mendelson [2002], Bousquet [2002]) yields that with probability of at least 1 − δ/2 over the choice of S we have:
L_ramp(v) ≤ L̂_ramp(v) + 2√(B_x B / m) + √(2 ln(4/δ) / m),    (7)
where we denote the sum of the last two terms by ε₁. Since the ramp loss is upper bounded by the hinge-loss, we have shown the following inequalities,
L01(v) ≤ L_ramp(v) ≤ L̂_ramp(v) + ε₁ ≤ L̂_h(v) + ε₁.    (8)
Next, we rely on the following claim adapted from [Shalev-Shwartz et al., 2011, Lemma 2.4]:
² It should be noted that solving SVM with kernels takes more time than solving a linear SVM. Hence, if the original instance space is a low dimensional Euclidean space we lose polynomially in the time complexity. However, when the original instance space is also an RKHS, and our kernel is composed on top of the original kernel, the increase in the time complexity is not significant.
Claim 5 Let p(z) = Σ_{j=0}^∞ β_j z^j be any polynomial that satisfies Σ_{j=0}^∞ β_j² 2^j ≤ B, and let w be any vector in X. Then, there exists v_w in the RKHS defined by the kernel given in Equation (5), such that ‖v_w‖² ≤ B and for all x ∈ X, ⟨v_w, ψ(x)⟩ = p(⟨w, x⟩).
For any polynomial p, let ℓ_p(z) = ℓ_h(p(z)), and let L̂_p be defined analogously. If p is an odd polynomial, we have that ℓ_p(y⟨w, x⟩) = [1 − yp(⟨w, x⟩)]₊. By the definition of v as minimizing L̂_h(v) over ‖v‖² ≤ B, it follows from the above claim that for any odd p that satisfies Σ_{j=0}^∞ β_j² 2^j ≤ B and for any w* ∈ X, we have that
L̂_h(v) ≤ L̂_h(v_{w*}) = L̂_p(w*).
Next, it is straightforward to verify that if p is an odd polynomial that satisfies:
max_{z∈[−1,1]} |p(z)| ≤ α  and  min_{z∈[γ,1]} p(z) ≥ 1    (9)
then, ℓ_p(z) ≤ (1 + α)ℓ_γ(z) for all z ∈ [−1, 1]. For such polynomials, we have that L̂_p(w*) ≤ (1 + α)L̂_γ(w*). Finally, by Hoeffding's inequality, for any fixed w*, if m > log(2/δ)/(2ε₂²), then with probability of at least 1 − δ/2 over the choice of S we have that
L̂_γ(w*) ≤ L_γ(w*) + ε₂.
So, overall, we have obtained that with probability of at least 1 − δ,
L01(v) ≤ (1 + α) L_γ(w*) + (1 + α)ε₂ + ε₁.
Choosing m large enough so that (1 + α)ε₂ + ε₁ ≤ ε, we obtain:
Corollary 6 Fix α, ε, δ ∈ (0, 1) and γ > 0. Let p be an odd polynomial such that Σ_j β_j² 2^j ≤ B and such that Equation (9) holds. Let m be a training set size that satisfies:
m ≥ (16/ε²) · max{2B, 2 log(4/δ), (1 + α)² log(2/δ)}.
Then, with probability of at least 1 − δ, the solution of Equation (3) satisfies L01(v) ≤ (1 + α)L*_γ + ε.
The proof of Theorem 1 follows immediately from the above corollary together with the following two lemmas, whose proofs are provided in the appendix.
Lemma 7 For any γ > 0 and ν > 2, let α = 1/(νγ) and let B = (1/γ²)(0.06 e^{4ν²} + 3). Then, there exists a polynomial that satisfies the conditions of Corollary 6 with the parameters α, γ, B.
Lemma 8 For any γ ∈ (0, 1/2) and α ∈ [5, 1/γ], let ν = 1/(αγ) and let B = 4ν²(96ν² + exp(18ν log(8να²) + 5)). Then, there exists a polynomial that satisfies the conditions of Corollary 6 with the parameters α, γ, B.
3.1 Proof of Theorem 3
The proof is similar to the proof of Theorem 1 except that we replace Claim 5 with the following:
Lemma 9 Let p(z) = Σ_{j=1}^d β_j z^{2j−1} be any polynomial, and let w be any vector in X. Then, there exists v_w in the RKHS defined by the kernel given in Theorem 3, such that ‖v_w‖² ≤ ‖β‖₁ and for all x ∈ X, ⟨v_w, ψ(x)⟩ = p(⟨w, x⟩).
Proof We start with an explicit definition of the mapping ψ(x) corresponding to the kernel in the theorem. The coordinates of ψ(x) are indexed by tuples (k₁, . . . , k_j) ∈ [n]^j for j = 1, 3, . . . , 2d−1. Coordinate (k₁, . . . , k_j) equals √|β_j| x_{k₁} x_{k₂} . . . x_{k_j}. Next, for any w ∈ X, we define explicitly the vector v_w for which ⟨v_w, ψ(x)⟩ = p(⟨w, x⟩). Coordinate (k₁, . . . , k_j) of v_w equals sign(β_j) √|β_j| w_{k₁} w_{k₂} . . . w_{k_j}. It is easy to verify that indeed ‖v_w‖² ≤ ‖β‖₁ and for all x ∈ X, ⟨v_w, ψ(x)⟩ = p(⟨w, x⟩).
Since for any x ∈ X we also have that K(x, x) ≤ ‖β‖₁, the proof of Theorem 3 follows using the same arguments as in the proof of Theorem 1.
4 Extension to other learning models
In this section we briefly describe how our results can be extended to adversarial online learning and
to PAC learning with malicious noise. We start with the online learning model.
4.1 Online learning
Online learning is performed in a sequence of consecutive rounds, where at round t the learner is given an instance, x_t ∈ X, and is required to predict its label. After predicting ŷ_t, the target label, y_t, is revealed. The goal of the learner is to make as few prediction mistakes as possible. See for example Cesa-Bianchi and Lugosi [2006].
A classic online classification algorithm is the Perceptron [Rosenblatt, 1958]. The Perceptron maintains a vector w_t and predicts according to ŷ_t = sign(⟨w_t, x_t⟩). Initially, w₁ = 0, and at round t the Perceptron updates the vector using the rule w_{t+1} = w_t + 1[ŷ_t ≠ y_t] y_t x_t. Freund and Schapire [1999] observed that the Perceptron can also be implemented efficiently in an RKHS using a kernel function.
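A minimal sketch of such a kernelized Perceptron is given below; the streaming interface is our own, and the kernel shown is the one from Equation (5), which requires inputs in the unit ball.

```python
import numpy as np

class KernelPerceptron:
    """Perceptron in the RKHS of K: w is kept implicitly as the list of
    examples on which the algorithm erred, as in Freund and Schapire [1999]."""

    def __init__(self, kernel):
        self.kernel = kernel
        self.sv_x, self.sv_y = [], []   # mistake examples and their labels

    def score(self, x):
        # <w_t, psi(x)> = sum over past mistakes of y_i K(x_i, x)
        return sum(y * self.kernel(xi, x) for xi, y in zip(self.sv_x, self.sv_y))

    def step(self, x, y):
        y_hat = 1 if self.score(x) > 0 else -1
        if y_hat != y:                   # update w_{t+1} = w_t + y_t x_t in the RKHS
            self.sv_x.append(x)
            self.sv_y.append(y)
        return y_hat

# Kernel of Equation (5); x, x' must lie in the unit ball.
k = lambda a, b: 1.0 / (1.0 - 0.5 * float(np.dot(a, b)))
```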
Agmon [1954] and others have shown that if there exists w* such that for all t, y_t⟨w*, x_t⟩ ≥ 1 and ‖x_t‖² ≤ B_x, then the Perceptron will make at most ‖w*‖² B_x prediction mistakes. This bound holds without making any additional distributional assumptions on the sequence of examples.
This mistake bound has been generalized to the noisy case (see for example Gentile [2003]) as follows. Given a sequence (x₁, y₁), . . . , (x_m, y_m), and a vector w*, let L_h(w*) = (1/m) Σ_{t=1}^m ℓ_h(y_t⟨w*, x_t⟩), where ℓ_h is the hinge-loss. Then, the average number of prediction mistakes the Perceptron will make on this sequence is at most
(1/m) Σ_{t=1}^m 1[ŷ_t ≠ y_t] ≤ L_h(w*) + √(B_x ‖w*‖² L_h(w*) / m) + B_x ‖w*‖² / m.    (10)
Let L_γ(w*) = (1/m) Σ_{t=1}^m 1(y_t⟨w*, x_t⟩ ≤ γ). Trivially, Equation (10) can yield a bound whose leading term is (1 + 1/γ) L_γ(w*) (namely, it corresponds to α = 1/γ). On the other hand, Ben-David et al. [2009] have derived a mistake bound whose leading term depends on L_γ(w*) (namely, it corresponds to α = 0), but the runtime of the algorithm is at least m^{1/γ²}. The main result of this section is to derive a mistake bound for the Perceptron based on all values of α between 5 and 1/γ.
Theorem 10 For any γ ∈ (0, 1/2) and α ≥ 5, let ν = 1/(αγ) and let B_{γ,α} be the value of B as defined in Theorem 1. Then, for any sequence (x₁, y₁), . . . , (x_m, y_m), if the Perceptron is run on this sequence using the kernel function given in Equation (5), the average number of prediction mistakes it will make is at most:
min_{γ∈(0,1/2), α≥5, w*∈X} [ (1 + α)L_γ(w*) + √(2B_{γ,α}(1 + α)L_γ(w*) / m) + 2B_{γ,α} / m ].
Proof [sketch] Equation (10) holds if we implement the Perceptron using the kernel function given in Equation (5), for which B_x = 2. Furthermore, similarly to the proof of Theorem 1, for any polynomial p that satisfies the conditions of Corollary 6 we have that there exists v* in the RKHS corresponding to the kernel, with ‖v*‖² ≤ B and with L_h(v*) ≤ (1 + α)L_γ(w*). The theorem follows.
4.2 PAC learning with malicious noise
In this model, introduced by Valiant [1985] and specified to the case of halfspaces with margin by Servedio [2003], Long and Servedio [2011], there is an unknown distribution over instances in X and there is an unknown target vector w* ∈ X such that |⟨w*, x⟩| ≥ γ with probability 1. The learner has access to an example oracle. At each query to the oracle, with probability 1 − η it samples a random example x ∈ X according to the unknown distribution over X, and returns (x, sign(⟨w*, x⟩)). However, with probability η, the oracle returns an arbitrary element of X × {±1}. The goal of the learner is to output a predictor that has L01(h) ≤ ε, with respect to the "clean" distribution.
Auer and Cesa-Bianchi [1998] described a general conversion from online learning to the malicious noise setting. Servedio [2003] used this conversion to derive a bound based on the Perceptron's mistake bound. In our case, we cannot rely on the conversion of Auer and Cesa-Bianchi [1998] since it requires a proper learner, while the online learner described in the previous section is not proper.
Instead, we propose the following simple algorithm. First, sample m examples. Then, solve kernel SVM on the resulting noisy training set.
Theorem 11 Let η ∈ (0, 1/4), δ ∈ (0, 1/2), and α > 5. Let B be as defined in Theorem 1. Let m be a training set size that satisfies: m ≥ (64/ε²) max{2B, (2 + α)² log(1/δ)}. Then, with probability of at least 1 − 2δ, the output of kernel-SVM on the noisy training set, denoted h, satisfies L01(h) ≤ (2 + α)η + ε/2. It follows that if η ≤ ε/(2(2 + α)) then L01(h) ≤ ε.
Proof Let S̃ be a training set in which we replace the noisy examples with clean iid examples. Let L̃ denote the empirical loss over S̃ and L̂ denote the empirical loss over S. As in the proof of Theorem 1, we have that w.p. of at least 1 − δ, for any v in the RKHS corresponding to the kernel that satisfies ‖v‖² ≤ B we have that:
L01(v) ≤ L̃_ramp(v) + 3ε/8,    (11)
by our assumption on m. Let η̂ be the fraction of noisy examples in S. Note that S̃ and S differ in at most mη̂ elements. Therefore, for any v,
L̃_ramp(v) ≤ L̂_ramp(v) + η̂.    (12)
Now, let v be the minimizer of L̂_h, let w* be the target vector in the original space (i.e., the one which achieves correct predictions with margin γ on clean examples), and let v_{w*} be its corresponding element in the RKHS (see Claim 5). We have
L̂_ramp(v) ≤ L̂_h(v) ≤ L̂_h(v_{w*}) = L̂_p(w*) ≤ (1 + α)L̂_γ(w*) ≤ (1 + α)η̂.    (13)
In the above, the first inequality is since the ramp loss is upper bounded by the hinge loss, the second inequality is by the definition of v, the third equality is by Claim 5, the fourth inequality is by the properties of p, and the last inequality follows from the definition of η̂. Combining the above yields,
L01(v) ≤ (2 + α)η̂ + 3ε/8.
Finally, using Hoeffding's inequality, we know that for the definition of m, with probability of at least 1 − δ we have that η̂ ≤ η + ε/(8(2 + α)). Applying the union bound and combining the above we conclude that with probability of at least 1 − 2δ, L01(v) ≤ (2 + α)η + ε/2.
5 Summary and Open Problems
We have derived upper bounds on the time and sample complexities as a function of the approximation factor. We further provided a recipe for designing kernel functions with a small time and sample
complexity for any given value of approximation factor and margin. Our results are applicable to
agnostic PAC Learning, online learning, and PAC learning with malicious noise.
An immediate open question is whether our results can be improved. If not, can computational hardness results be formally established? Another open question is whether the upper bounds we
have derived for an improper learner can be also derived for a proper learner.
Acknowledgements: This work is supported by the Israeli Science Foundation grant number 59810 and by the German-Israeli Foundation grant number 2254-2010. Shai Shalev-Shwartz is incumbent of the John S. Cohen Senior Lectureship in Computer Science.
References
S. Agmon. The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6(3):382-392, 1954.
P. Auer and N. Cesa-Bianchi. On-line learning with malicious noise and the closure algorithm. Annals of Mathematics and Artificial Intelligence, 23(1):83-99, 1998.
P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2002.
P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101:138-156, 2006.
S. Ben-David and H. Simon. Efficient learning of linear perceptrons. In NIPS, 2000.
S. Ben-David, D. Pal, and S. Shalev-Shwartz. Agnostic online learning. In COLT, 2009.
S. Ben-David, D. Loker, N. Srebro, and K. Sridharan. Minimizing the misclassification error rate using a surrogate convex loss. In ICML, 2012.
E. Blais, R. O'Donnell, and K. Wimmer. Polynomial regression under arbitrary product distributions. In COLT, 2008.
O. Bousquet. Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms. PhD thesis, Ecole Polytechnique, 2002.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
N. Cristianini and J. Shawe-Taylor. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296, 1999.
Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, August 1997.
C. Gentile. The robustness of the p-norm algorithms. Machine Learning, 53(3):265-299, 2003.
A. Kalai, A. R. Klivans, Y. Mansour, and R. Servedio. Agnostically learning halfspaces. In Proceedings of the 46th Foundations of Computer Science (FOCS), 2005.
A. T. Kalai and R. Sastry. The isotron algorithm: High-dimensional isotonic regression. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
A. R. Klivans, P. M. Long, and R. A. Servedio. Learning halfspaces with malicious noise. The Journal of Machine Learning Research, 10:2715-2740, 2009.
P. M. Long and R. A. Servedio. Learning large-margin halfspaces with more malicious noise. In NIPS, 2011.
F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386-407, 1958. (Reprinted in Neurocomputing (MIT Press, 1988).)
B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. MIT Press, 2002.
R. A. Servedio. Smooth boosting and learning with malicious noise. Journal of Machine Learning Research, 4:633-648, 2003.
S. Shalev-Shwartz, O. Shamir, and K. Sridharan. Learning kernel-based halfspaces with the 0-1 loss. SIAM Journal on Computing, 40:1623-1646, 2011.
L. G. Valiant. Learning disjunctions of conjunctions. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, pages 560-566, August 1985.
V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32:56-85, 2004.
3,858 | 4,493 | FastEx: Hash Clustering with Exponential Families
Amr Ahmed*
Research at Google, Mountain View, CA
[email protected]
Sujith Ravi
Research at Google, Mountain View, CA
[email protected]
Alexander J. Smola
Research at Google, Mountain View, CA
[email protected]
Shravan M. Narayanamurthy
Microsoft Research, Bangalore, India
[email protected]
Abstract
Clustering is a key component in any data analysis toolbox. Despite its importance, scalable algorithms often eschew rich statistical models in favor of simpler
descriptions such as k-means clustering. In this paper we present a sampler, capable of estimating mixtures of exponential families. At its heart lies a novel
proposal distribution using random projections to achieve high throughput in generating proposals, which is crucial for clustering models with large numbers of
clusters.
1 Introduction
Fast clustering algorithms are a staple of exploratory data analysis. See e.g. [1] and references.
Clustering is useful for partitioning data into sets of similar items. Such tools are vital e.g. in large
scale document analysis, or to provide a modicum of adaptivity to content personalization for a large
basis of users [2, 3]. Likewise it allows advertisers to target specific slices of the user base of an
internet portal. While similarity and prototype based techniques [4, 5] satisfy a large range of these
requirements, they tend to be less useful for the purpose of obtaining a proper probabilistic representation of the data. The latter, is useful for determining typical and unusual events, forecasting traffic,
information retrieval, and when the results require integration into a larger probabilistic model.
Large scale problems, however, come with a rather surprising dilemma: as we increase the amount of data we can estimate the model parameters for fixed model complexity (typically the number of clusters) more accurately. As a consequence we have the opportunity (and need) to increase the number of parameters, e.g. clusters. The latter is often ignored but vital to the rationale for using more data; after all, for fixed model complexity there are rapidly diminishing returns afforded by
extra data once a given threshold is exceeded. See also [6, 7] for a frequentist perspective. Simply
put, it is a waste of computational resources to design algorithms capable of processing big data to
build a simple model (e.g. millions of documents for tens of clusters).
Contributions We address the following problems: We need to deal with a large number of instances, e.g. by means of multicore sampling and we need to draw from a large number of clusters.
When sampling from many clusters, the time to compute the object likelihood with respect to all
clusters dominates the inference procedure. For instance, for 1000 clusters and documents of 1000
words a naive sampler needs to perform 10⁶ floating point operations. We can expect that a single sample will cost in excess of 1 millisecond. Given 10M documents this amounts to approximately 3
hours for a single Gibbs sampling iteration, which is clearly infeasible: sampling requires hundreds
* This work was carried out while AA, SR, SMN and AJS were with Yahoo Research.
of passes. This problem is exacerbated for hierarchical models. To alleviate this issue we use binary
hashing to compute a fast proposal distribution.
2 Mixtures of Exponential Families
Our models are mixtures of exponential families due to their flexibility. This is essentially an extended model of [8, 9]. For convenience we focus on mixtures of multinomials with correspondingly conjugate Dirichlet distributions. The derivations are entirely general and can be used e.g. for mixtures of Gaussians or Poisson distributions. In the following we denote by X the domain of observations X = {x₁, . . . , x_m} drawn from some distribution p. We want to estimate p.
2.1 Exponential Families
We begin with a primer. In exponential families distributions over random variables are given by
p(x; θ) = exp(⟨φ(x), θ⟩ − g(θ)).    (1)
Here φ : X → F is a map from x to the vector space of sufficient statistics (for simplicity assume that F is a Hilbert space) and θ ∈ F. Finally, g(θ) ensures that p(x; θ) is properly normalized via
g(θ) := log ∫_X exp(⟨φ(x), θ⟩) dν(x).    (2)
Here ν is the measure associated with X (e.g. the Lebesgue measure L2 or a weighted counting measure for the Poisson distribution). It is well known [10] that the mean parameter associated with (1) and the maximum likelihood estimate given X are connected via μ[θ] = μ[X] where
μ[θ] := E_{x∼p(x;θ)}[φ(x)] = ∂_θ g(θ)  and  μ[X] := (1/m) Σ_{i=1}^m φ(x_i).    (3)
The mean must match the empirical average for it to be a maximum likelihood estimate.
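This identity is easy to check numerically for a finite outcome space; the sketch below (toy parameters of our own choosing) compares a finite-difference gradient of the log-partition function with the model mean.

```python
import numpy as np

d = 5
rng = np.random.default_rng(0)
theta = rng.normal(size=d)            # natural parameters; phi(x) = e_x

g = np.log(np.exp(theta).sum())       # log-partition g(theta), counting measure
p = np.exp(theta - g)                 # p(x; theta)
mean = p                              # E[phi(x)] is the probability vector itself

# Numerical gradient of g(theta) matches mu[theta] = grad g(theta).
eps = 1e-6
grad = np.array([
    (np.log(np.exp(theta + eps * np.eye(d)[i]).sum()) - g) / eps
    for i in range(d)
])
assert np.allclose(grad, mean, atol=1e-4)
```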
Example 1 (Multinomial) Assume that φ(x) = e_x ∈ ℝ^d and X = {1, . . . , d}, i.e. we have a set
of d different outcomes and e_x denotes the canonical vector associated with x. Empirical averages
and probability estimates are directly connected via p(x; θ) = n_x/m = e^{θ_x}. Here n_x denotes the
number of times we observe x. This yields θ_x = log n_x/m and g(θ) = 0.
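As a minimal sketch of Example 1 in Python (the sizes and random data below are illustrative, not taken from the paper), the maximum likelihood estimate simply matches empirical frequencies; with enough draws every outcome is observed, so the logs stay finite:

import numpy as np

rng = np.random.default_rng(0)
d = 5
x = rng.integers(0, d, size=1000)          # observations from {0, ..., d-1}
counts = np.bincount(x, minlength=d)       # n_x: how often each outcome occurs
mu_hat = counts / counts.sum()             # empirical mean mu[X] = n_x / m
theta = np.log(mu_hat)                     # natural parameters theta_x = log(n_x / m)
assert np.allclose(np.exp(theta), mu_hat)  # p(x; theta) = exp(theta_x), since g(theta) = 0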
2.2 Conjugate Priors
In general, high-dimensional maximum likelihood estimation is statistically infeasible and we require a prior on θ to obtain reliable estimates. We could impose a norm prior on θ, leading to Laplace
or Gaussian priors. Alternatively one may resort to conjugate priors. They have the property that
the posterior distribution p(θ|X) over θ remains in the same family as p(θ) via
p(θ | m₀, m₀μ₀) = exp(⟨m₀μ₀, θ⟩ − m₀ g(θ) − h(m₀, m₀μ₀)).   (4)
Here the conjugate prior itself is a member of the exponential family with sufficient statistic φ(θ) =
(θ, −g(θ)) and with natural parameters (m₀, m₀μ₀). Commonly m₀ is referred to as the concentration
parameter, which acts as an effective sample size, and μ₀ is the mean parameter describing where on
the marginal polytope we expect the distribution to be. Note that μ₀ ∈ F. It corresponds to the mean
of a putative distribution over observations (in a Dirichlet process this is the base measure and m₀ is
the concentration parameter). Finally, h(m₀, m₀μ₀) is a log-partition function in the parameters of
the conjugate prior. For instance, for the discrete distribution we have the Dirichlet, for the Gaussian
the Gauss-Wishart, and for the Poisson distribution the Gamma. Normalization in (4) implies
p(θ|X) ∝ p(X|θ) p(θ | m₀, m₀μ₀)  ⟹  p(θ|X) = p(θ | m₀ + m, m₀μ₀ + m μ[X]).   (5)
We simply add the virtual observations m₀μ₀ described by the conjugate prior to the actual observations X and compute the maximum likelihood estimate with respect to the augmented dataset.
Example 2 (Multinomial) We simply update the empirical observation counts. This yields the
smoothed estimates for event probabilities in x:
p(x; θ) = (n_x + m₀[μ₀]_x) / (m + m₀)   and equivalently   θ_x = log (n_x + m₀[μ₀]_x) / (m + m₀).   (6)
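A corresponding sketch of the conjugate update (5)-(6): the prior contributes m₀ virtual observations with mean μ₀ (the concrete values chosen here are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
d, m0 = 5, 10.0
mu0 = np.full(d, 1.0 / d)                     # prior mean parameter mu_0 (uniform here)
x = rng.integers(0, d, size=100)
counts = np.bincount(x, minlength=d)          # n_x
m = counts.sum()
p_smoothed = (counts + m0 * mu0) / (m + m0)   # posterior mean parameter, eq. (6)
theta = np.log(p_smoothed)                    # smoothed natural parameters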
2.3 Mixture Models
The final piece is to describe the prior over mixture components. Our tools are entirely general
and could take advantage of Bayesian nonparametrics, such as the Dirichlet process or the Pitman-Yor process. For the sake of brevity and to ensure computational tractability (we need to limit the
time it takes to sample from the cluster distribution for a given instance) we limit ourselves to a
Dirichlet-Multinomial model with k components (a sketch of the generative process in code follows the list):
• Draw a discrete mixture p(y|π) with y ∈ {1, . . . , k} from a Dirichlet with parameters (m₀^cluster, μ₀^cluster).
• For each component k draw an exponential families distribution p(x|θ_y) from the conjugate with parameters (m₀^component, μ₀^component).
• For each i first draw a component y_i ∼ p(y|π), then draw an observation x_i ∼ p(x|θ_{y_i}).
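The sketch referenced above, written with purely illustrative sizes and hyperparameters (none of them taken from the paper); the symbol π for the mixing weights is our notation:

import numpy as np

rng = np.random.default_rng(0)
k, d, n, doc_len = 10, 1000, 500, 50            # illustrative sizes

# Draw the mixture weights and the per-component multinomials from their priors.
pi = rng.dirichlet(np.full(k, 0.5))             # p(y | pi) from a Dirichlet prior
phi = rng.dirichlet(np.full(d, 0.1), size=k)    # one multinomial p(x | theta_y) per component

docs, labels = [], []
for i in range(n):
    y = rng.choice(k, p=pi)                     # y_i ~ p(y | pi)
    x = rng.multinomial(doc_len, phi[y])        # x_i ~ p(x | theta_{y_i}), bag of words
    docs.append(x)
    labels.append(y)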
Note that we have two exponential families components here: a smoothed multinomial to capture
cluster membership, i.e. y ∼ p(y|π), and one pertaining to the cluster distribution. Both parts could
be joined into a single exponential family model with y being the latent variable, a property that we
will exploit only for the purpose of fast sampling.
The venerable EM algorithm [8] is effective for a small number of clusters. For large numbers,
however, Gibbs sampling of the collapsed likelihood is computationally more advantageous since it
only requires updates of the sufficient statistics of two clusters per sample, whereas EM necessitates
an update of all clusters. Collapsed Gibbs sampling works as follows (a sketch of step 1 appears after the list):
1. For a given x_i draw y_i ∼ p(y_i | X, Y^{−i}) ∝ p(y_i | Y^{−i}) · p(x_i | y_i, X^{−i}, Y^{−i}).
2. Update the sufficient statistics for the changed clusters.
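The sketch of step 1 referenced above, for the Dirichlet-multinomial case. It uses a smoothed point estimate of the cluster parameters, in the spirit of the Taylor approximation of Section 3.1, rather than the exact ratio of partition functions, and it makes the O(k d) cost per draw explicit (all variable names are hypothetical):

import numpy as np

def naive_gibbs_step(x, word_counts, cluster_sizes, alpha, beta, rng):
    """One collapsed draw y_i ~ p(y_i | X, Y^{-i}); x is a bag-of-words count vector.
    word_counts: (k, d) per-cluster sufficient statistics (with x_i already removed);
    cluster_sizes: (k,) instances per cluster. Every cluster is scored: O(k d) work."""
    k, d = word_counts.shape
    log_p = np.log(cluster_sizes + alpha)                 # prior term p(y_i | Y^{-i})
    theta = np.log((word_counts + beta) /
                   (word_counts.sum(axis=1, keepdims=True) + d * beta))
    log_p += theta @ x                                    # smoothed log p(x_i | y_i, ...)
    log_p -= log_p.max()                                  # stabilise before exponentiating
    p = np.exp(log_p)
    p /= p.sum()
    return rng.choice(k, p=p)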
For large k step 1, particularly computing p(x_i | y_i, X^{−i}, Y^{−i}), dominates the inference procedure.
We now show how this step can be accelerated significantly using a good proposal distribution,
parallel sampling, and a Taylor expansion for general exponential families.
3 Acceleration
3.1 Taylor Approximation for Collapsed Inference
Let us briefly review the key equations involved in collapsed inference. Conjugate priors allow us to
integrate out the natural parameters θ and accelerate mixing in Gibbs samplers [11]. We can obtain
a closed form expression for the data likelihood:
p(X | m₀, m₀μ₀) = ∫ p(X|θ) p(θ | m₀, m₀μ₀) dθ = exp( h(m₀ + m, m₀μ₀ + m μ[X]) − h(m₀, m₀μ₀) ).   (7)
By Bayes rule this implies that
p(x | X, m₀, μ₀) ∝ p(X ∪ {x} | m₀, m₀μ₀) ∝ exp( h(m₀ + m + 1, m₀μ₀ + m μ[X] + φ(x)) ).   (8)
Unfortunately the normalization h is often nontrivial to compute or even intractable. The exception
being the multinomial, where the Laplace smoother amounts to the correct posterior x|X, i.e.
p(x | X, μ₀, m₀) = (n_x + m₀[μ₀]_x) / (m + m₀).   (9)
In general, unfortunately, (8) will not have quite so simple a form. Strictly speaking we would need
to compute h and perform the update directly. This can be prohibitively costly or even impossible
depending on the choice of sufficient statistics. While not necessary for our running example we
state the reasoning below to indicate that the problem can be overcome quite easily.
We exploit the properties of the log-partition function h of the conjugate prior for an approximation:
∇_{(m₀, m₀μ₀)} h(m₀, m₀μ₀) = E_{θ∼p(θ|m₀, m₀μ₀)}[(−g(θ), θ)] =: (−ḡ*, θ*),
hence h(m₀ + 1, m₀μ₀ + φ(x)) ≈ h(m₀, m₀μ₀) + ⟨θ*, φ(x)⟩ − ḡ*.   (10)
Here ḡ* is the expected value of the log partition function. This quantity is often hard to compute
and fortunately unnecessary for inference since θ* immediately implies a suitable normalization.
Applying the Taylor expansion in h to (7) yields an approximation of x|X as
p(x | X, m₀, m₀μ₀) ≈ exp( ⟨φ(x), θ*⟩ − g(θ*) ).   (11)
Here the normalization g(θ*) is an immediate consequence of the fact that this is a member of the
exponential family. The key advantage of (11) is that nowhere do we need to compute h directly
(the latter may not be available in closed form). We only need to estimate the parameter θ*.
Lemma 1 The expected parameter θ* = E_{θ∼p(θ|X)}[θ] induces at most O(m⁻¹) sampler error.
Proof. The contribution of a single instance to the sufficient statistics is O(m⁻¹). Since h is
C^∞, the residual of the Taylor expansion is bounded by O(m⁻¹).
Hence, (11) explains why updates obtained in collapsed inference often resemble (or are identical
to) a maximum-a-posteriori estimate obtained by conjugate priors, such as in Dirichlet-multinomial
smoothing. The computational convenience afforded by (11) is well justified statistically.
3.2 Locality Sensitive Importance Sampling
The next step is to accelerate the inner product ⟨φ(x), θ*_y⟩ in (11) since this expression is evaluated
k times at each Gibbs sampler step. For large k this is the dominant term. We overcome this
problem by using binary hashing [12]. This provides a good approximation and therefore a proposal
distribution that can be used in a Metropolis-Hastings scheme without an excessive rejection rate.
To provide some motivation consider metric-based clustering algorithms such as k-means. They do
not suffer greatly from dealing with large numbers of clusters: after all, we only need to find the
closest prototype. Finding the closest point within a set in sublinear time is a well studied problem
[13, 14, 15, 16]. In a nutshell it involves transforming the set of cluster centers into a data structure
that is only dependent on the inherent dimensionality of the data rather than the number of objects
or the dimensionality of the actual data vector.
The problem with sampling from the collapsed distribution is that for a proper sampler we need to
consider all cluster probabilities including those related to clusters which are highly implausible and
unlikely to be chosen for a given instance. That is, most of the time we discard the very computations
that made sampling so expensive. This is extremely wasteful. Instead, we design a sampler which
typically will only explore the clusters which are sufficiently close to the "best" matching cluster by
means of a proposal distribution. [17, 12] effectively introduce binary hash functions:
Theorem 2 For u, v ∈ ℝⁿ and vectors w drawn from a spherically symmetric distribution on ℝⁿ
the following relation between the signs of inner products and the angle ∠(u, v) between the vectors holds:
∠(u, v) = π Pr{ sgn[⟨u, w⟩] ≠ sgn[⟨v, w⟩] }.   (12)
This follows from a simple geometric observation, namely that only whenever w falls into the angle
between the unit vectors in the directions of u and v we will have opposite signs. Any distribution
of w orthogonal to the plane containing u, v is immaterial.
Since exponential families rely on inner products to determine the log-likelihood of how well the
data fits, we can use hashing to accelerate the expensive part considerably, namely comparing data
with clusters. More specifically, ⟨u, v⟩ = ‖u‖ · ‖v‖ · cos ∠(u, v) allows us to store the signature of
a vector in terms of its signs and its norm to estimate the inner product efficiently.
Definition 3 We denote by h^l(v) ∈ {0, 1}^l a binary hash of v and by z^l(u, v) an estimate of the
probability of matching signs, obtained as follows:
[h^l(v)]_i := sgn[⟨v, w_i⟩], where the w_i are fixed random directions, and z^l(u, v) := (1/l) ‖h(u) − h(v)‖₁.   (13)
That is, z^l(u, v) measures how many bits differ between the hash vectors h(u) and h(v) associated with u, v. In this case we may estimate the unnormalized log-likelihood of an instance being
assigned to a cluster via
s^l(x, y) = ‖θ_y‖ ‖φ(x)‖ cos( π z^l(φ(x), θ_y) ) − g(θ_y) + log n_y.   (14)
We omitted the normalization log n of the cluster probability since it is identical for all components.
The above can be computed efficiently for any combination of x and y since we can precompute
(and store) the values of ‖θ_y‖, ‖φ(x)‖, g(θ_y), log n_y, and h(φ(x_i)) for all observations x_i.
The binary representation is significant since on modern CPUs computing the Hamming distance
between h(u) and h(v) via z^l(u, v) can be achieved in a fraction of a single clock cycle by means of
a vectorized instruction set. This is supported by current generation ARM and Intel CPU cores and
by AMD and Nvidia GPUs (for instance Intel's SandyBridge series of processors can process up to
256 bits in one clock cycle per core) and easily accessible via compiler optimization.
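A compact Python sketch of (12)-(14), illustrative rather than the paper's implementation: random hyperplanes produce the bit signatures, and the fraction of differing bits estimates the angle, hence the inner product. All precomputable quantities (norms, g(θ_y), log n_y, signatures) are passed in:

import numpy as np

rng = np.random.default_rng(0)
dim, l = 1000, 64
W = rng.standard_normal((l, dim))                  # random hyperplanes w_1 .. w_l (eq. 12)

def signature(v):
    return (W @ v > 0)                             # h^l(v) in {0,1}^l, one bit per hyperplane

def score(phi_x, theta_y, g_theta_y, n_y, h_x, h_y):
    z = np.count_nonzero(h_x != h_y) / len(h_x)    # z^l: fraction of differing bits (eq. 13)
    angle = np.pi * z                              # estimated angle between phi(x) and theta_y
    inner = np.linalg.norm(theta_y) * np.linalg.norm(phi_x) * np.cos(angle)
    return inner - g_theta_y + np.log(n_y)         # s^l(x, y) of eq. (14)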
3.3 Error Guarantees
Note, though, that s^l(x, y) is not accurate, since we only use an estimate of the inner product. Hence
we need to accommodate for sampling error. The following probabilistic guarantee ensures that we
can turn s^l(x, y) into an upper bound of the likelihood.
Theorem 4 Given k ∈ ℕ mixture components, let l be the number of bits used for hashing. Then
the unnormalized cluster log-likelihood is bounded with probability at least 1 − δ by
s̄^l(x, y) = ‖θ_y‖ ‖φ(x)‖ cos( π max{ 0, z^l(φ(x), θ_y) − √(log(k/δ)/(2l)) } ) − g(θ_y) + log n_y.   (15)
Proof. By Theorem 2 we know that in expectation the inner product can be computed via the
probability of a matching sign. Since we only take a finite sample average we effectively partition
this into l equivalence classes. For convenience denote by z*(φ(x), θ_y) the expected value of
z^l(φ(x), θ_y) over all hash functions. By Hoeffding's theorem we know that
Pr{ z*(φ(x), θ_y) < z^l(φ(x), θ_y) − ε } ≤ e^{−2lε²}.   (16)
Solving for ε yields ε ≥ √((− log δ)/(2l)). Since we know that z*(φ(x), θ_y) ≥ 0, we can bound it for
all k clusters with probability δ by taking the union bound over all events with δ/k probability.
Remark 5 Using 128 hash bits and with a failure probability of at most 10⁻⁴ for k = 10⁴ clusters
the correction applied to z^l is less than 0.38.
Note that in practice we can reduce this correction factor significantly for two reasons: firstly, for
small probabilities the basic Chernoff bound is considerably loose and we would be better advised
to take the KL-divergence terms in the Chernoff bound directly into account, since the probability
of deviation is bounded in terms of e^{−m D(p‖p̂)}. Secondly, we use hashing only to generate a proposal
distribution: once we select a particular cluster we verify the estimate using the true likelihood.
3.4 Metropolis Hastings
As an alternative to using the approximate upper bound directly, we employ it as a proposal distribution
in a Metropolis-Hastings (MH) framework. Denote by q the proposal distribution constructed from
the bound on the log-likelihood after normalization. For a given x_i we first sample a new cluster
assignment y_i^new ∼ q(·) and then accept the proposal using (15) with probability min{1, r}, where
q(y) ∝ e^{s̄^l(x,y)}   and   r = [ q(y_i^old) p(y_i^new) p(x_i | X_{y_i^new}, m₀, μ₀) ] / [ q(y_i^new) p(y_i^old) p(x_i | X_{y_i^old}, m₀, μ₀) ].   (17)
Here p(x_i | X, m₀, μ₀) is the true collapsed conditional likelihood of (8). The specific form depends
on h(·) as discussed in Section 3.1.
Note that for a standard collapsed Gibbs sampler, p(x | X, μ₀, m₀) would be computed for all k candidate clusters; in our framework, we only need to compute it for 2 clusters, the proposed
and the old one: an O(k) time saving per sample, albeit with a nontrivial rejection probability.
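One MH transition per instance can then be sketched as follows; exact_log_lik stands for however the collapsed likelihood p(x_i | X_y, m₀, μ₀) of (8) is evaluated, and all names are hypothetical:

import numpy as np

def mh_step(i, y_old, proposal_logits, exact_log_lik, prior_log_p, rng):
    """One Metropolis-Hastings transition, eq. (17). proposal_logits[y] = s_bar^l(x_i, y);
    exact_log_lik(i, y) = log p(x_i | X_y, m0, mu0); prior_log_p[y] = log p(y)."""
    q = np.exp(proposal_logits - proposal_logits.max())
    q /= q.sum()
    y_new = rng.choice(len(q), p=q)
    # Only 2 exact likelihood evaluations instead of k: the O(k) saving per sample.
    log_r = (np.log(q[y_old]) - np.log(q[y_new])
             + prior_log_p[y_new] - prior_log_p[y_old]
             + exact_log_lik(i, y_new) - exact_log_lik(i, y_old))
    return y_new if np.log(rng.random()) < log_r else y_old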
Example 3 For discrete distributions the conjugate is the Dirichlet distribution Dir(α_{1:d}) with
components given by α_j = m₀[μ₀]_j, the sum of the components being m₀, where
j ∈ {1, · · · , d}. In this case p(x | X, μ₀, m₀) reduces to the predictive distribution given in (9) if x is
a singleton, i.e. a single observation, and to the ratio of two log partition functions if x is non-singleton.¹ We have the following predictive posterior:
p(x_i | X, y_i, μ₀, m₀) = [ Γ(Σ_{d=1}^D [n_d^{y_i} + α_d]) / Γ(Σ_{d=1}^D [x_d + n_d^{y_i} + α_d]) ] · ∏_{d=1}^D [ Γ(x_d + n_d^{y_i} + α_d) / Γ(n_d^{y_i} + α_d) ].   (18)
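The ratio of partition functions in (18) is a ratio of Gamma functions, conveniently evaluated in log space; a sketch using scipy (the multinomial coefficient, constant in y_i, is dropped, so this is the unnormalized score used for sampling):

import numpy as np
from scipy.special import gammaln

def log_predictive(x, n_y, alpha):
    """Unnormalized log p(x_i | X, y_i, mu0, m0) of eq. (18) for a bag-of-words count
    vector x, cluster word counts n_y (both length-D) and Dirichlet parameters alpha."""
    post = n_y + alpha
    return (gammaln(post.sum()) - gammaln(x.sum() + post.sum())
            + np.sum(gammaln(x + post) - gammaln(post)))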
3.5 Updating the Sufficient Statistics
We conclude our discussion of past proposals by discussing the updates involved in the sufficient
statistics. For the sake of brevity we focus on multinomial models. For Gaussians changes in
sufficient statistics can be achieved using a low rank update of the second order matrix and its
inverse. Similar operations apply to other exponential family distributions.
Whenever we assign an instance x to a new cluster we need to update the sufficient statistics of the
old cluster y and the new cluster y′ via
(m_y − 1) μ[X|y] ← m_y μ[X|y] − φ(x),    m_y ← m_y − 1,
(m_{y′} + 1) μ[X|y′] ← m_{y′} μ[X|y′] + φ(x),    m_{y′} ← m_{y′} + 1.
Here μ[X|y] denotes the sufficient statistics for cluster y, i.e. the sufficient statistic obtained
from X by considering only instances for which y_i = y. Likewise m_y is the number of instances
associated with y. This is then used to update the natural parameter θ_y and the hash representation
h(θ_y). For multinomials the mean natural parameters are just log counts. Thus these updates scale
as O(W), where W is the number of unique items (e.g. words in a document) in x (for Gaussians
the cost is O(d²), where d is the dimensionality of the data).
The second step is to update the hash-representation. For l bits a naive update would perform the
dot-product between the mean natural parameters and each random vector, which scales as O(Dl),
where D is the vocabulary size. However we can cache the l dot product values (as floating point
numbers) for each cluster and update only those dot product values. Thus if x has W unique words,
we only incur an O(W l) penalty. Note that we never need to store the random vectors w since we
can compute them on the fly by means of hash functions rather than invoking a random number
generator. We use murmurhash as a fast and high quality hash function.
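Both updates can be sketched together: moving one document only touches its W unique words and the l cached projections. This is a rough sketch under hypothetical data structures, not the actual implementation; beta is a smoothing pseudo-count we add so that the logs stay finite:

import numpy as np

def move_instance(x_idx, x_cnt, y_old, y_new, counts, proj, W_rand, beta=1.0):
    """x_idx/x_cnt: indices and counts of the W unique words in x; counts: (k, D)
    per-cluster word counts; proj: (k, l) cached dot products <theta_y, w_j>;
    W_rand: (l, D) random hyperplanes. Total cost is O(W l), independent of D."""
    for y, sign in ((y_old, -1), (y_new, +1)):
        old_theta = np.log(counts[y, x_idx] + beta)   # natural params are log counts
        counts[y, x_idx] += sign * x_cnt
        new_theta = np.log(counts[y, x_idx] + beta)
        # Incremental refresh of the cached float projections for the changed words.
        proj[y] += W_rand[:, x_idx] @ (new_theta - old_theta)
    return proj[[y_old, y_new]] > 0                   # refreshed hash bits h(theta_y)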
4 Experiments
4.1 Data and Methods
To provide a realistic comparison on publicly available datasets we used documents from the
Wikipedia collection. More specifically, we extracted the articles and category attributes from a
dump of its database. We generated multiple datasets for our experiments by first sampling a set of
k categories and then by pooling all the articles from the chosen categories to form a document collection. This way the data was comparable and the apparent and desired diversity in terms of cluster
sizes was matched. We extracted both 100 and 1000 categories, yielding the following datasets:
W100    100 clusters    292k articles    2.5M unique words vocabulary
W1000   1000 clusters   710k articles   5.6M unique words vocabulary
We compare our fast sampler to a more conventional uncollapsed inference procedure. That is, we
compare the following two algorithms:
Baseline Clustering using a Dirichlet (DP) Multinomial Mixture model. It uses an uncollapsed likelihood and alternates between sampling cluster assignments and drawing from the Dirichlet
distribution of the posterior.
¹x might represent an entire document, with [x]_d denoting the count of word d in x. The predictive distribution
follows. This can be understood if we let φ(x) = e_x in the singleton case, and let φ(x) = ([x]₁, · · · , [x]_D) in
the bag-of-words case. The natural parameters of the multinomial remain the same in both cases.
[Figure 1 shows two panels for 100 clusters, 292k articles: clustering quality (VI) versus time in seconds, comparing the Baseline against FastEx with 8/16/32/64/128 bits.]
Figure 1: (Left) Convergence of both a baseline implementation and of FastEx. (Right) The effect
of the hash size on performance. Note that the baseline implementation only finishes a few iterations
while our method almost finishes convergence.
FastEx We provide runtime results for a single core (our approach supports multi-core architectures, as discussed in the summary). Unless stated otherwise we use l = 32 bit to represent
a document and cluster. This choice was made since it provides an efficient trade-off between memory usage and cost to compute the hash signatures.
4.2 Evaluation
For each clustering method, we report results in terms of two different measures: efficiency and
clustering quality. The former is measured in terms of average run time. For the latter we use the
fact that we have access to the Wikipedia category tag of each article which we treat as the gold
standard for evaluation purposes.
We report results in terms of Variation of Information (VI) [18]. The latter is a standard measure of
the distance between two clusterings. Suppose we have two clusterings (partition of a document set
into several subsets) C1 and C2 then:
VI(C₁, C₂) = H(C₁) + H(C₂) − 2 I(C₁, C₂),   (19)
where H(.) is entropy and I(C1 , C2 ) is mutual information between C1 and C2 . A lower value for
VI implies a closer match to the gold standard and better quality.
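For reference, (19) can be computed directly from two label vectors; a small sketch assuming 0-based integer labels:

import numpy as np

def variation_of_information(c1, c2):
    """VI(C1, C2) = H(C1) + H(C2) - 2 I(C1, C2), eq. (19), for label arrays c1, c2."""
    c1, c2 = np.asarray(c1), np.asarray(c2)
    n = len(c1)
    joint = np.zeros((c1.max() + 1, c2.max() + 1))
    np.add.at(joint, (c1, c2), 1.0 / n)               # empirical joint distribution
    p1, p2 = joint.sum(axis=1), joint.sum(axis=0)     # marginals of each clustering
    entropy = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(p1, p2)[nz]))
    return entropy(p1) + entropy(p2) - 2.0 * mi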
We first report our results on the W100 dataset. As shown in Figure 1 our method is an order of
magnitude faster than the baseline. Hence we use a log-scale for the time axis. As evident from this
Figure, our method both converges much faster than the baseline and achieves the same clustering
quality. Figure 1 also displays the effect of the number of hash bits l on solution quality. We vary
l ∈ {8, 16, · · · , 128} and draw the VI curve as time goes by. As evident from the figure, increasing
the number of bits causes our method to converge faster due to a tighter bound on the log-likelihood
and thus a higher acceptance ratio. We also observe that beyond 64 to 128 bits we do not observe
any noticeable improvement, as predicted by our theoretical guarantees.
To see how the performance of our method changes as we increase the number of clusters, we show
in Table 1 both the time required to compute the proposal distribution for a given document and the
time it takes to perform the full sampling per document which includes: proposal time + time to
compute acceptance ratio + time to update the clusters sufficient statistics and hash representation.
As shown in this Table, thanks to the fast instruction set support for XOR and bitcount operations on
modern processors, the time does not increase significantly as we increase the number of clusters and
the overall time increases modestly as the number of clusters increases. Compare that to standard
Collapsed Gibbs sampling in which the time scales linearly with the number of clusters.
Bitsize l                 8        16        32        64       128
k = 100    Proposal     2.34      2.34      2.34      2.56      2.90
k = 100    Total       69.52     69.52     78.77     81.16     82.19
k = 1000   Proposal    18.80     18.80     18.80     21.42     29.12
k = 1000   Total      103.91    103.91    103.91    108.98    114.61
Table 1: Average time in microseconds spent per document for hash sampling in terms of computing
the proposal distribution and total computation time. As can be seen, the total computation time for
sampling 10x more clusters only increases slightly, mostly due to the increase in proposal time.
Dataset    FastEx Quality (VI)    Baseline Quality (VI)    Speedup
W100              5.04                   5.60                9.25
W1000            14.10                  14.00               37.37
Table 2: Clustering quality (VI) and absolute speedup achieved by hash sampling over the baseline
(DP) clustering for different Wikipedia datasets.
Table 2 has details on the final quality and speedup achieved by our method over the baseline. Due
to the high quality of the proposals, the time to draw from 1000 rather than 100 clusters increases only slightly.
5 Discussion and Future Work
We presented a new efficient parallel algorithm to perform scalable clustering for exponential families. It is general and uses techniques from hashing and information retrieval to circumvent the
problem of large numbers of clusters. Future work includes the application to a larger range of
exponential family models and the extension of the fast retrieval scheme to hierarchical clustering.
Parallelization So far we only described a single processor sampling procedure. Unfortunately
this is not scalable given large amounts of data. To address the problem within single machines we
use a multicore sampler to parallelize inference. This requires a small amount of approximation
: rather than sampling p(y_i | x_i, X^{−i}, Y^{−i}) in sequence we sample up to c latent variables y_i in
parallel in c processor cores. The latter approximation is negligible since c is tiny compared to the
total number of documents we have. Our approach is an adaptation of the strategy described in [19].
In particular, we dissociate sampling and updating of the sufficient statistics to ensure efficient lock
management and to avoid resource contention.
[Diagram: a reader thread streams instances from disk to samplers 1 . . . n; a dedicated updater thread applies their changes to the shared sufficient statistics and a writer thread persists state back to disk.]
A key advantage is that all samplers share the same sufficient statistics regardless of the number of
cores used. By delegating write permissions to a separate updater thread the code is considerably
simplified. This allows us to be parsimonious in terms of memory use. A multi-machine setting is
also achievable by keeping the sets of sufficient statistics synchronized between computers. This is
possible using the synchronization architecture of [20].
Sequential Estimation Our approach is compatible with sequential estimation methods and it is
possible to use hash signatures for Sequential Monte Carlo estimation for clustering as in [21, 22].
However it is highly nontrivial to parallelize particle filters over a network of workstations.
Stochastic Gradient Descent An alternative is to use stochastic gradient descent on a variational
approximation, following the approach proposed by [23]. Again, sampling is the dominant cost for
inference and it can be accelerated by binary hashing.
References
[1] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008.
[2] D. Agarwal and S. Merugu. Predictive discrete latent factor models for large scale dyadic data.
Conference on Knowledge Discovery and Data Mining, pages 26-35. ACM, 2007.
[3] A. Das, M. Datar, A. Garg, and S. Rajaram. Google news personalization: scalable online
collaborative filtering. In Conference on World Wide Web, pages 271-280. ACM, 2007.
[4] D. Emanuel and A. Fiat. Correlation clustering - minimizing disagreements on arbitrary
weighted graphs. Algorithms - ESA 2003, 11th Annual European Symposium, volume 2832
of Lecture Notes in Computer Science, pages 208-220. Springer, 2003.
[5] J. MacQueen. Some methods of classification and analysis of multivariate observations. In
L. M. LeCam and J. Neyman, editors, Proc. 5th Berkeley Symposium on Math., Stat., and
Prob., page 281. U. California Press, Berkeley, CA, 1967.
[6] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for highdimensional analysis of M -estimators with decomposable regularizers. CoRR, abs/1010.2731,
2010. informal publication.
[7] V. Vapnik and A. Chervonenkis. The necessary and sufficient conditions for consistency in the
empirical risk minimization method. Pattern Recognition and Image Analysis, 1(3):283-305,
1991.
[8] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum Likelihood from Incomplete Data
via the EM Algorithm. Journal of the Royal Statistical Society B, 39(1):1-22, 1977.
[9] C. E. Rasmussen. The infinite gaussian mixture model. In Advances in Neural Information
Processing Systems 12, pages 554-560, 2000.
[10] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational
inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[11] T.L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy
of Sciences, 101:5228-5235, 2004.
[12] M. Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of
the thirty-fourth annual ACM symposium on Theory of Computing, pages 380-388, 2002.
[13] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In International
Conference on Machine Learning, 2006.
[14] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In M. P.
Atkinson, M. E. Orlowska, P. Valduriez, S. B. Zdonik, and M. L. Brodie, editors, Proceedings
of the 25th VLDB Conference, pages 518-529, Edinburgh, Scotland, 1999. Morgan Kaufmann.
[15] Y. Shen, A. Ng, and M. Seeger. Fast Gaussian process regression using kd-trees. In Y. Weiss,
B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18,
pages 1227-1234, Cambridge, MA, 2005. MIT Press.
[16] R.J. Bayardo, Y. Ma, and R. Srikant. Scaling up all pairs similarity search. In Proceedings of
the 16th international conference on World Wide Web, pages 131-140. ACM, 2007.
[17] M.X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut
and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6), 1995.
[18] M. Meila. Comparing clusterings by the variation of information. In COLT, 2003.
[19] A.J. Smola and S. Narayanamurthy. An architecture for parallel topic models. In Very Large
Databases (VLDB), 2010.
[20] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A.J. Smola. Scalable inference in
latent variable models. In Web Science and Data Mining (WSDM), 2012.
[21] A. Ahmed, Q. Ho, C. H. Teo, J. Eisenstein, A. J. Smola, and E. P. Xing. Online inference for
the infinite cluster-topic model: Storylines from streaming text. In AISTATS, 2011.
[22] A. Ahmed, Q. Ho, J. Eisenstein, E. P. Xing, A. J. Smola, and C. H. Teo. Unified analysis of
streaming news. In www, 2011.
[23] D. Mimno, M. Hoffman, and D. Blei. Sparse stochastic inference for latent dirichlet allocation.
In International Conference on Machine Learning, 2012.
Bayesian Warped Gaussian Processes
Miguel Lázaro-Gredilla
Dept. Signal Processing & Communications
Universidad Carlos III de Madrid - Spain
[email protected]
Abstract
Warped Gaussian processes (WGP) [1] model output observations in regression
tasks as a parametric nonlinear transformation of a Gaussian process (GP). The
use of this nonlinear transformation, which is included as part of the probabilistic
model, was shown to enhance performance by providing a better prior model on
several data sets. In order to learn its parameters, maximum likelihood was used.
In this work we show that it is possible to use a non-parametric nonlinear transformation in WGP and variationally integrate it out. The resulting Bayesian WGP
is then able to work in scenarios in which the maximum likelihood WGP fails: the low-data regime,
Low data regime, data with censored values, classification, etc. We demonstrate
the superior performance of Bayesian warped GPs on several real data sets.
1 Introduction
In a Bayesian setting, the Gaussian process (GP) is commonly used to define a prior probability
distribution over functions. This leads to a simple and elegant probabilistic framework that allows
to solve, among others, regression and classification tasks, achieving state-of-the-art performance
[2, 3]. For a thorough treatment on GPs, the reader is referred to [4].
In the regression setting, output data are often modelled directly as observations from a GP. However,
it is shown in [1] that for some data sets, better models can be built if the observed outputs are
regarded as a nonlinear distortion (the so-called warping) of a GP instead. For a warped GP (WGP),
the warping function can take any parametric form, and in [1] the sum of a linear function and several
tanh functions is used. The parameters defining the transformation are then learned using maximum
likelihood. WGPs have the advantage of having a closed-form expression for the evidence and have
been applied in a number of works [5, 6], but also have several shortcomings: Maximum likelihood
learning might result in overfitting if a warping function with too many parameters is used (or if too
few data are available), it does not model additional output noise after the warping, it cannot model
'flat' warping functions for reasons explained below and, as a consequence, runs into problems
when observations are clustered (many output data take the same value). In this work we set out to
show that it is possible to place another GP prior on the warping function and variationally integrate
it out. By doing so, all of the aforementioned problems disappear and we can enjoy the benefits of
WGPs on a wider selection of scenarios.
The remainder of this work is organised as follows: In Section 2 we introduce the Bayesian WGP
model, which is analytically intractable. In Section 3, a variational lower bound on the exact evidence of the model is derived, which allows for approximate inference and hyperparameter learning.
We show the advantages of integrating out the warping function in Section 4, where we compare
the performance of the maximum likelihood and the Bayesian versions of warped GPs. Finally, we
wrap-up with some concluding remarks in Section 5.
2 The Bayesian warped Gaussian process model
Given a set of input values {x_i ∈ ℝ^D}_{i=1}^n and their associated targets {y_i ∈ ℝ}_{i=1}^n, we define the
Bayesian warped Gaussian process (BWGP) model as
y_i = g(f(x_i)) + ε_i,   (1a)
where f(x) is a (possibly noisy) latent function with D-dimensional inputs, g(f) is an arbitrary
warping function with scalar inputs and ε is a Gaussian noise term. Proceeding in a Bayesian
fashion, we place priors on g, f, and ε_i. We use Gaussian process and normal priors
f(x) ∼ GP(μ₀, k(x, x′)),   g(f) ∼ GP(f, c(f, f′)),   ε_i ∼ N(0, σ²).   (1b)
Notice that by setting the prior mean on g(f) to f, we assume that the warping is 'by default'
the identity. For f, any valid covariance function k(x, x′) can be used, whereas for the warping
function g we use a squared exponential: c(f, f′) = σ_g² exp(−(f − f′)²/(2λ²)). The mentioned
hyperparameters, as well as those included in k(x, x′), can be collected in θ ≡ {θ_k, σ_g, λ, σ, μ₀}.
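To make the generative model (1) concrete, the following sketch draws one data set from the BWGP prior at a finite set of inputs; all hyperparameter values are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(-3, 3, n)

def se_kernel(a, b, var, length):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

# f(x) ~ GP(mu0, k): latent function evaluated at the inputs.
mu0, K = 0.0, se_kernel(x, x, var=1.0, length=1.0)
f = rng.multivariate_normal(mu0 * np.ones(n), K + 1e-8 * np.eye(n))

# g(f) ~ GP(f, c): warping with prior mean equal to the identity.
C = se_kernel(f, f, var=0.5, length=0.5)        # c(f, f') on the sampled latent values
g = rng.multivariate_normal(f, C + 1e-8 * np.eye(n))

y = g + 0.1 * rng.standard_normal(n)            # y_i = g(f(x_i)) + eps_i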
It might seem that since f (x) is already an arbitrary nonlinear function, further distorting its output
through g(f ) is an additional unnecessary complication. However, even though g(f (x)) can model
arbitrary functions just as f (x) is able to, the implied prior is very different since the composition
of two GPs g(f (x)) is no longer a GP. This is the same idea as with copulas, but here the warping
function g(f ) is treated in non-parametric form.
2.1 Relationship with maximum likelihood warped Gaussian processes
Though the idea of distorting a standard GP is common to WGP and BWGP, there are several
relevant differences worth clarifying:
In [1], noise is present only in latent function f (x) and observed data corresponds exactly to the
warping of f(x). BWGP has an additional noise term ε that can account for extra noise in the
observations after warping. This term can be neglected by setting σ² = 0.
BWGP places a prior on the warping function, instead of using a parametric definition, which allows
for maximum flexibility while avoiding overfitting. On the other hand, by choosing the number of
tanh functions in their parametric warping function, WGP sets a trade-off between both.
Finally, the definition of the warping function is reversed between BWGP and WGP. If no noise is
present, our warping function y = g(f ) maps latent space f to output space y. In contrast, in [1]
the inverse mapping f = w(y) is defined due to analytical tractability reasons. Because of this,
the warping function in [1] is restricted to be monotonic, so that it is possible to unambiguously
identify its inverse y = w⁻¹(f) = g(f) and thus define a valid probability distribution in output
space. Since we already work with the direct warping function g(f ), we do not need to impose any
constraint on it and thus can use a GP prior. Also, as discussed in [1], WGPs cannot deal properly
with models that involve a 'flat' region (i.e., g′(f) = 0) in the warping function (such as ordinal
regression or classification), since the inverse w(y) = g⁻¹(y) is not well defined. These flat regions
result in probability masses in output space. In those cases, the probability density of data under
the WGP model (the evidence) will be infinity, so that it cannot be used for model selection and
numerical computation becomes unstable. None of these problems arise in BWGP, which can handle
both continuous and discrete observations and model warping functions with flat regions.
2.2 Relationship with other Gaussian processes models
For a given warping function g(f), BWGP can be seen as a standard GP model with likelihood
p(y_i | f(x_i)) = N(y_i | g(f(x_i)), σ²). Different choices for g(f) result in different GP models:
• GP regression [7]: Corresponds to setting g(f) = f (the mean in our prior).
• GP classification [3]: Corresponds to setting g(f) = sign(f) with y_i ∈ {−1, +1} and
σ² = 0. Using a noisy latent function f(x) as prior and a step function as likelihood is
equivalent to using a noiseless latent function as prior and a normal cdf sigmoid function as
likelihood [4], so this model corresponds exactly with GP probit classification.
• Ordinal (noisy) regression [8]: Corresponds to setting g(f) = Σ_{k=1}^K H(f − b_k) and optionally setting σ² = 0. H(f) is the Heaviside step function and the b_k are parameters defining
the widths and locations of the K bins in latent space.
• Maximum likelihood WGP [1]: Corresponds to setting g(f) = w⁻¹(f) and σ² = 0.
Because g(f ) is integrated out, all of the above models, and possibly many others, can be learned
using BWGP. We will see examples of problems requiring other likelihoods in Section 4. Thus, to
some extent, BWGP can be regarded as a likelihood learning tool.
3 Variational inference for BWGP
Analytical inference in the BWGP model (1) is intractable. Instead of resorting to expensive Monte
Carlo methods, we will develop an efficient variational approximation of comparable computational
cost to that of WGP. We follow ideas discussed in [9] in order to gain tractability.
3.1 Augmented model
First, let us rewrite (1) instantiated only at the available observations y = [y₁ . . . y_n]^⊤. We omit
conditioning on the inputs {x_i}_{i=1}^n and the hyperparameters θ. We have
p(y|g) = N(y | g, σ²I),   p(g|f) = N(g | f, C_ff),   p(f) = N(f | μ₀, K),   (2)
where f = [f₁ . . . f_n]^⊤ is the latent function evaluated at the training inputs {x₁ . . . x_n} and g =
[g₁ . . . g_n]^⊤ is the warping function evaluated at f. We use K to refer to the n × n covariance matrix
of the latent function, with entries [K]_ij = k(x_i, x_j), whereas similarly [C_ff]_ij = c(f_i, f_j) is the
n × n warping covariance matrix. In general, we use [C_ab]_ij = c(a_i, b_j).
Now we proceed as in sparse GPs [10] and augment this model with a set of m inducing variables
u = [u₁ . . . u_m]^⊤ that correspond to evaluating the function u(v) = g(v) − v at some auxiliary values
v₁ . . . v_m. We can expand p(g|f) by first conditioning on u to obtain p(g|u, f), and then including
the prior p(u). This yields the augmented model
p(y|g) = N(y | g, σ²I),   p(u) = N(u | 0, C_vv),   (3a)
p(g|u, f) = N(g | f + C_fv C_vv⁻¹ u, C_ff − C_fv C_vv⁻¹ C_fv^⊤),   p(f) = N(f | μ₀, K).   (3b)
Note that the original model (2) and the augmented model (3) are exactly identical, since we can
marginalise u out from (3) to get exactly (2). In other words, we introduced u in a consistent manner,
so that ∫ p(g|u, f) p(u) du = p(g|f). The inclusion of the inducing variables does not change the
model, independently of their number m or their locations v1 . . . vm .
Inducing variables u have a physical interpretation in this model. Expressing the warping function
as g(v) = u(v) + v, the inducing variables correspond to evaluating GP u(v) at locations v1 . . . vm ,
which live in latent space (just as f does). Observe that u provides a probabilistic description of the
warping function. In particular, as m grows and the sampling in latent space becomes more and more
dense¹, the covariance C_ff − C_fv C_vv⁻¹ C_fv^⊤ gets closer to zero² and p(g|u, f) becomes a Dirac delta,
thus making the warping function deterministic given u: g(f) = f + [c(f, v₁) . . . c(f, v_m)] C_vv⁻¹ u.
3.2 Variational lower bound
Kullback-Leibler (KL) divergence to the true posterior p(g, u, f |y). We can write
Z
p(y, g, u, f )
log p(y) ? log p(y) ? KL(q(g, u, f )||p(g, u, f |y)) = q(g, u, f ) log
dgdf du = F ,
q(g, u, f )
1
We can make m, which is the number of inducing inputs and associated inducing variables, as big as we
desire (and thus make the sampling arbitrarily dense), independently of the number of available samples n.
2
?
Note that Cf v C?1
om approximation to Cf f , whose quality grows with m.
vv Cf v is a Nystr?
3
where F is a variational lower bound on the evidence log p(y). Since log p(y) is constant for any
choice of q, it is obvious that maximising F wrt q yields the best approximation in the mentioned
KL sense within the considered family of distributions. We should choose a family that can model
the posterior as well as possible while keeping the computation of F tractable. If no constraints on
q are imposed, maximisation retrieves the exact posterior.
We expand q(g, u, f) = q(g|u, f) q(u|f) q(f) and constrain it as follows: q(f) = N(f | μ, Σ),
q(u|f) = q(u), q(g|u, f) = p(g|u, f). We argue that these constraints should still allow for a good
approximation: The exact posterior over f for any monotonic warping function is Gaussian (see
[1]), so it is reasonable to set q(f) to be a Gaussian; the GPs u(v) and f(x) are independent a priori
and encode different parts of the model, so it is reasonable to approximate them as independent a
posteriori, q(u|f) = q(u); and finally, given a dense sampling of the latent space (which is feasible,
since it is one-dimensional), p(g|u, f) is virtually a Dirac delta, so conditioning on the observations
has no effect and we can set q(g|u, f) = p(g|u, f). Using the constrained expansion for q we get
F(q(u), μ, Σ) = ∫∫ q(u) q(f) [ ∫ p(g|u, f) log p(y|g) dg + log( p(u)/q(u) ) ] df du − KL( q(f) ‖ p(f) ).
The inner integral yields
∫ p(g|u, f) log p(y|g) dg = −(n/2) log(2πσ²) − (1/(2σ²)) { trace(C_ff − C_fv C_vv⁻¹ C_fv^⊤) + ‖y − f‖²
   − 2 y^⊤ C_fv C_vv⁻¹ u + u^⊤ C_vv⁻¹ C_fv^⊤ C_fv C_vv⁻¹ u + 2 u^⊤ C_vv⁻¹ C_fv^⊤ f },
which can be averaged analytically over q(f) = N(f | μ, Σ). To this end, we define ψ₀ =
⟨trace(C_ff)⟩_q(f), Ψ₂ = ⟨C_fv^⊤ C_fv⟩_q(f), Ψ₁ = ⟨C_fv⟩_q(f), and ψ₃ = ⟨C_fv^⊤ f⟩_q(f), which are
ψ₀ = n σ_g²,
[Ψ₂]_jk = Σ_{i=1}^n σ_g⁴ λ exp( −(v_j − v_k)²/(4λ²) − ([μ]_i − (v_j + v_k)/2)²/(2[Σ]_ii + λ²) ) / √(2[Σ]_ii + λ²),
[Ψ₁]_ij = σ_g² λ exp( −([μ]_i − v_j)²/(2([Σ]_ii + λ²)) ) / √([Σ]_ii + λ²),
[ψ₃]_j = Σ_{i=1}^n σ_g² λ ([μ]_i λ² + [Σ]_ii v_j) exp( −([μ]_i − v_j)²/(2([Σ]_ii + λ²)) ) / √(([Σ]_ii + λ²)³).
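For reference, these four statistics are straightforward to evaluate with numpy; a sketch following the expressions above, where mu and S hold the posterior means and the diagonal entries [Σ]_ii, and v holds the inducing inputs:

import numpy as np

def psi_statistics(mu, S, v, sg2, lam2):
    """psi0, Psi1, psi3, Psi2 for the SE warping kernel; mu, S: (n,) posterior means
    and variances [Sigma]_ii; v: (m,) inducing inputs; sg2 = sigma_g^2, lam2 = lambda^2."""
    lam = np.sqrt(lam2)
    psi0 = len(mu) * sg2
    d1 = S[:, None] + lam2                                  # [Sigma]_ii + lambda^2
    Psi1 = sg2 * lam * np.exp(-(mu[:, None] - v) ** 2 / (2 * d1)) / np.sqrt(d1)
    psi3 = np.sum(Psi1 * (mu[:, None] * lam2 + S[:, None] * v) / d1, axis=0)
    vbar = 0.5 * (v[:, None] + v[None, :])                  # (v_j + v_k) / 2
    dv2 = (v[:, None] - v[None, :]) ** 2
    d2 = 2 * S[:, None, None] + lam2                        # 2 [Sigma]_ii + lambda^2
    Psi2 = np.sum(sg2 ** 2 * lam
                  * np.exp(-dv2 / (4 * lam2) - (mu[:, None, None] - vbar) ** 2 / d2)
                  / np.sqrt(d2), axis=0)
    return psi0, Psi1, psi3, Psi2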
After averaging over q(f ), most of the terms do not depend on u and can be taken out of the integral.
The remaining terms which depend on u can be arranged as follows:
∫ q(u) log [ p(u) exp( −(1/(2σ²)) u^⊤ C_vv⁻¹ Ψ₂ C_vv⁻¹ u + (1/σ²)(y^⊤ Ψ₁ − ψ₃^⊤) C_vv⁻¹ u ) / q(u) ] du.   (4)
Note that we have not specified any functional form for q(u), so any distribution over u is valid.
In particular, we want to choose q(u) so as to maximise (4), because that would be the choice that
maximises F(q(u), μ, Σ). Inspecting (4), we notice that it has the form of a Jensen's inequality
lower bound. The maximum wrt q(u) can then be obtained by reversing Jensen's inequality:
log ∫ p(u) exp( −(1/(2σ²)) u^⊤ C_vv⁻¹ Ψ₂ C_vv⁻¹ u + (1/σ²)(y^⊤ Ψ₁ − ψ₃^⊤) C_vv⁻¹ u ) du
   = (1/(2σ²)) (y^⊤ Ψ₁ − ψ₃^⊤)(Ψ₂ + σ² C_vv)⁻¹ (Ψ₁^⊤ y − ψ₃) − (1/2) log( |Ψ₂ + σ² C_vv| / |C_vv| ) + (n/2) log σ²,
which corresponds to selecting³ q*(u) = N(u | C_vv α, σ² C_vv (Ψ₂ + σ² C_vv)⁻¹ C_vv) with α =
(Ψ₂ + σ² C_vv)⁻¹ (Ψ₁^⊤ y − ψ₃). Replacing one of the variational distributions within the bound by its
optimal value is sometimes referred to as using a 'marginalised variational bound' [11]. Grouping
all terms together, we finally obtain:
F_BWGP(μ, Σ) = −(1/(2σ²)) ( ‖y − μ‖² + trace(Σ) + ψ₀ − trace(Ψ₂ C_vv⁻¹) ) − (1/2) log( |Ψ₂ + σ² C_vv| / |C_vv| )
   + (1/(2σ²)) (y^⊤ Ψ₁ − ψ₃^⊤)(Ψ₂ + σ² C_vv)⁻¹ (Ψ₁^⊤ y − ψ₃) − (n/2) log 2π − KL( N(μ, Σ) ‖ N(μ₀, K) ).
³Using variational arguments, q*(u) ∝ p(u) exp( −(1/(2σ²)) u^⊤ C_vv⁻¹ Ψ₂ C_vv⁻¹ u + (1/σ²)(y^⊤ Ψ₁ − ψ₃^⊤) C_vv⁻¹ u ).
This bound depends on μ and Σ, i.e., n + n(n+1)/2 variational parameters which must be optimised.
Even for moderate sizes of n, this can be inconvenient. Following [12, 13], we can reduce the
number of free parameters by considering the conditions that must be met at any local maximum. By
imposing ∂F(μ, Σ)/∂Σ = 0, we know that the posterior covariance can be expressed as Σ = (K⁻¹ +
Λ)⁻¹, for some diagonal matrix Λ. With this definition, the bound F(μ, Λ) now depends only on
2n free variational parameters and can be computed in O(n³) time and O(n²) space, just as WGP.
3.3 Model selection
The gradients of the variational bound F_BWGP(μ, Λ, θ) (now explicitly including its dependence on
the hyperparameters) can be computed analytically, so it is possible to jointly optimise it both wrt
the 2n free variational parameters and the hyperparameters θ in order to simultaneously perform model
selection (by choosing the hyperparameters) and obtain an accurate posterior (by choosing the
free variational parameters). The hyperparameters are the same as for a WGP that uses a single tanh
function, so no overfitting is expected, while still enjoying a completely flexible warping function.
3.4 Approximate predictive density
In order to use the proposed approximate posterior to make predictions for a new test output y*
given input x*, we need to compute q(y*|y) = ∫ p(y*|g*) p(g*|f*, u) q(u) p(f*|f) q(f) dg* df* du df.
Integration wrt all variables can be computed analytically except for f*, resulting in
q(y*|y) = ∫ q(y*|f*) q(f*|y) df*
with q(y*|f*) = N(y* | f* + c*^⊤ α, σ² + c** − c*^⊤ (C_vv + σ² C_vv Ψ₂⁻¹ C_vv)⁻¹ c*) and q(f*|y) =
N(f* | μ*, σ*²), where μ* = μ₀ + k*^⊤ K⁻¹ (μ − μ₀ 1), σ*² = k** − k*^⊤ (K + Λ⁻¹)⁻¹ k*, k* =
[k(x*, x₁) . . . k(x*, x_n)]^⊤, k** = k(x*, x*), c* = [c(f*, v₁) . . . c(f*, v_m)]^⊤, c** = c(f*, f*) = σ_g²,
and 1 is an appropriately sized vector of ones.
This latter one-dimensional integral can be computed numerically if needed, using Gaussian quadrature techniques. However, the posterior mean and variance can be computed analytically. Indeed,
E_q[y*|y] = μ* + Ψ₁* α,
V_q[y*|y] = α^⊤ (Ψ₂* − Ψ₁*^⊤ Ψ₁*) α + 2 (ψ₃* − μ* Ψ₁*) α + σ*²
   + σ² + c** − trace( Ψ₂* (C_vv + σ² C_vv Ψ₂⁻¹ C_vv)⁻¹ ),
where the starred Ψ quantities are defined as their non-starred counterparts, but using μ* and σ*² instead of μ
and Σ in their computation. In spite of this, the approximate posterior is not Gaussian in general.
4 Experiments
We will now investigate the behaviour of BWGP on several real regression and classification
datasets. In our experiments we will compare its performance with that of the original implementation⁴ of the maximum likelihood WGP model from [1]. In order to show the effect of varying
the complexity of the parametric warping function in WGP, we tested a 3 tanh model (the default,
used in the experiments from [1]) and a 20 tanh model, denoted as WGP3 and WGP20, respectively. We did our best to achieve the maximum accuracy in WGP, so in order to solve each data
split, we optimised its hyperparameters 5 times from a random initialisation (the implementation's
default method) and 5 times more using a standard GP to initialise the underlying GP (and randomly
initialising the warping function). Out of the 10 total runs, we used the one achieving a higher evidence. The BWGP model was initialised from a standard GP and run only once per data split. The
standard ARD SE covariance function [4] plus noise was used for the underlying
GP in all models.
The two measures that we use to compare performance are MSE = (1/n*) Σ_{i=1}^{n*} (y*_i − E_q[y*_i | y])² and
NLPD = −(1/n*) Σ_{i=1}^{n*} log q(y*_i | y). In both cases, a lower value indicates better performance.
⁴Available from http://www.gatsby.ucl.ac.uk/~snelson/.
4.1 Toy 1D data
First we evaluate the model on a simple one-dimensional toy problem. In order to generate a nonlinearly distorted signal, we round a sine function to the nearest integer and add Gaussian noise with
variance σ² = 2.5 × 10⁻³. The training set consists of 51 uniformly spaced samples between −π
and π. We train a standard GP, WGP, and BWGP and then we test them on 401 uniformly spaced
samples in the same interval. Results are displayed in Fig. 1.
[Figure 1 shows three panels: the training samples with the posterior means of GP, WGP3 and BWGP; the predictive densities p(y*|D) at x* = 0 and x* = 0.4; and the inferred warping functions g(f) for BWGP and g(f) = w⁻¹(f) for WGP3.]
Figure 1: Left: Posterior mean for the proposed models. A dashed envelope encloses 90% posterior
mass for WGP, whereas a shading is used to show 90% posterior mass for BWGP. Middle: The
dotted line shows the true posterior at x = 0 and x = 0.4, which is much better modelled by BWGP.
Right: Warping functions inferred by each model.
The warping functions look reasonable for both models. For WGP it is a deterministic function,
the inverse of the strictly monotonic function w(y), so it can never achieve completely 'flat' zones.
Since WGP does not model output noise explicitly, these flat zones transfer and magnify output
noise to latent space, with the consequent degradation in performance. Note the extra spread of
the posterior mass in comparison with the actual training data, which is much better modelled by
BWGP. The mean of WGP fails to follow the flat regions at zero, behaving as a sine function, just
like the standard GP. The standard GP is also unable to handle this signal properly because of the
non-stationary smoothness: Abrupt changes are followed by constant levels. BWGP is able to deal
properly with noisy quantised signals and it is able to learn the implicit quantisation function.
4.2 Regression data sets
We now turn to the three real data sets originally used in [1] to assess WGP and for which it is
specially suited. These are: abalone [14] (4177 samples, 8 dimensions), creep [15, 16] (2066
samples, 30 dimensions), and ailerons [17] (7154 samples, 40 dimensions). As for the size of
the training set, the typical choice is to use 1000, 800 and 1000 samples respectively. For each
problem, we generated 60 splits by randomly partitioning data. Results are displayed on Table 1.
The warping functions inferred by BWGP are displayed in Fig. 3(a)-(c) and are almost identical to
those displayed in [1] for WGP. The shading represents 99.99% posterior mass.
Table 1: MSE and NLPD figures for the compared methods on the original data sets of [1].

                          MSE                                         NLPD
Model      abalone      creep         ail (×10⁻⁸)    abalone      creep        ailerons
GP         4.55±0.14    584.9±71.2    2.95±0.16      2.17±0.01    4.46±0.03    -7.30±0.01
BWGP       4.55±0.11    491.8±36.2    2.91±0.14      1.99±0.01    4.31±0.04    -7.38±0.02
MLWGP3     4.54±0.10    502.3±43.3    2.80±0.11      1.97±0.02    4.21±0.03    -7.44±0.01
MLWGP20    4.59±0.32    506.3±46.1    3.42±2.87      1.99±0.05    4.21±0.08    -7.45±0.08
In terms of NLPD, BWGP always outperforms the standard GP, but it is in turn outperformed by
the maximum likelihood variants, which do not need to resort to any approximation to compute its
posterior. In terms of MSE, BWGP always performs better than WGP20 on these data sets, but only
performs better than WGP3 on the creep data set, which, on the other hand, is the one that seems
to benefit more from the use of a warping function. It seems that the additional flexibility of the
warping function in WGP20 is penalising its ability to generalise properly.
Upon seeing these results, we can conclude that WGP3 is already a good enough solution when
abundant training data are available and a simple warping function is required. This is reasonable:
The additional number of hyperparameters is small (only 9) and inference can be performed analytically. We can also see in Fig. 3(a)-(c) that the posterior over the warping functions is highly
peaked, so a maximum likelihood approach makes sense. However, performance might suffer when
the warping function becomes even slightly complex, as in creep, or when the number of available
data for training is very small (see the effect of the training set size on Fig. 2). In those cases, BWGP
is a safer option, since it will not overfit independently of the amount of data while allowing for a
highly flexible warping function.
[Figure 2 panels (plots omitted): abalone, creep, and ailerons, each plotting average MSE on 60 splits versus the number of training data (50 to 1000) for GP, WGP3, BWGP, and WGP20.]
Figure 2: Average MSE, as well as estimated ±1 std. deviation of the average, for 60 splits.
4.3 Censored regression data sets
We will now modify the previous data sets so that they become more challenging. We will consider
that they have been censored, i.e., values that lie above or below some thresholds have been truncated. This is a realistic setting in the case of physical measurements (e.g., due to the limitation of
measuring devices), but clusters of values lying at the end of the range can appear in other cases. In
our experiments, we truncated the upper and lower 20% of the previous datasets, while keeping the
remaining 60% of data untouched. Note that the methods have no information about the existing
truncation or the used thresholds.
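A minimal sketch of this censoring procedure (our own illustration; the quantile-based implementation is an assumption consistent with the description above):

```python
import numpy as np

def censor(y, lower=0.2, upper=0.8):
    # Truncate the lower and upper 20% of the targets at the empirical
    # quantile thresholds, leaving the middle 60% of the data untouched.
    lo, hi = np.quantile(y, [lower, upper])
    return np.clip(y, lo, hi)
```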
As discussed in [1], for this type of data, WGP tries to spread the samples in latent space by using a very sharp warping function, and this causes problems for the model. Additionally, the computation of the NLPD becomes erroneous due to numerical problems, with some of the tanh functions becoming very close to sign functions. None of these problems were experienced by BWGP, which still works significantly better than a standard GP on this type of problem; see Table 2. The corresponding warping functions are displayed in Figs. 3(e)-(g).
Table 2: MSE and NLPD figures for the compared methods on censored data sets.

                          MSE                                         NLPD
Model      abalone      creep         ail (×10⁻⁸)    abalone      creep        ailerons
GP         1.27±0.12    339.5±29.2    1.20±0.12      1.54±0.05    4.22±0.04    -7.70±0.05
BWGP       1.27±0.12    276.8±26.8    1.18±0.12      0.74±0.36    3.68±0.17    -7.89±0.07
WGP3       1.40±0.31    434.6±169.0   1.83±2.18      —            —            —
WGP20      1.38±0.22    382.1±93.4    1.39±0.78      —            —            —
4.4 Classification data sets
Classification can be regarded as an extreme case of censoring or quantisation of a regression data
set. We also mentioned in Section 2.2 that the (conditional) generative model of GP classification
[Figure 3 panels (plots omitted), each showing an inferred warping function g(f): (a) abalone (reg), (b) creep (reg), (c) ailerons (reg), (d) german (class), (e) abalone (cens), (f) creep (cens), (g) ailerons (cens), (h) titanic (class).]
Figure 3: Inferred warping functions.
Table 3: Error rates (in percentage) for the proposed model on the benchmark from Rätsch [18].

        ban    bre    dia    fla    ger    hea    ima    rin    spl    thy    tit    two    wav
GP      13.2   29.6   28.0   39.1   27.6   28.6   03.2   21.1   23.4   13.7   23.6   10.1   15.5
BWGP    10.7   29.5   24.5   33.3   23.9   23.5   02.1   04.8   17.0   04.7   22.0   04.2   12.4
GPC     10.6   29.5   24.2   33.5   24.8   21.7   02.1   07.9   22.8   04.0   22.2   04.2   11.4
could be seen as a particular selection for g(f). So we decided to test the BWGP model on the 13 classification data sets from the Rätsch benchmark [18].
Since WGP does not produce any meaningful results on this type of data, as mentioned in [1],
we did not include it in the comparison. Instead, we used a standard GP classifier (GPC) using a
probit likelihood and expectation propagation for approximate inference. We measured the error
rate, which is the performance figure we are interested in for those data sets, averaging over 10 splits
of the data. Results from Table 3 show that BWGP is able to match and occasionally exceed the
performance of GPC, outperforming in all cases the standard GP. The learned warping functions
look similar for the different data sets. We have depicted two typical cases in Figs. 3.(d) and 3.(h).
Especially good results are obtained for german, ringnorm, and splice, though we are aware that even better results can be obtained by using an isotropic SE covariance on these data sets [19].
5 Discussion and further work
In this work we have shown how it is possible to variationally integrate out the warping function
from warped GPs. This is useful to overcome the limitations of maximum likelihood warped GPs,
namely: To work in the low data sample regime; to handle censored observations and classification
data; to explicitly model output noise; and to allow for warping functions of unlimited flexibility,
which may include flat regions. The experiments demonstrate the improved robustness of the BWGP
model, which is able to operate properly in a much wider set of scenarios. While a specific model
(should it exist) will generally be a better tool for a specific task (e.g., GPC for classification), BWGP
behaves as a Swiss Army knife providing good performance on general tasks.
In addition to the tasks discussed in this work, there are other cases in which BWGP can be of
immediate application. One example is ordinal regression [8], where the locations and widths of
the bins can be integrated out instead of selected. Another potential future application is within the
popular field of copulas [20, 21, 22, 23], since they routinely resort to fixed warpings of GPs.
Acknowledgments
MLG is grateful to Michalis K. Titsias and the anonymous reviewers for helpful comments.
References
[1] E. Snelson, Z. Ghahramani, and C. Rasmussen. Warped Gaussian processes. In Advances in Neural Information Processing Systems 16, 2003.
[2] C. E. Rasmussen. Evaluation of Gaussian Processes and other Methods for Non-linear Regression. PhD thesis, University of Toronto, 1996.
[3] M. N. Gibbs. Bayesian Gaussian Processes for Regression and Classification. PhD thesis, University of Cambridge, 1997.
[4] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, 2006.
[5] M. N. Schmidt. Function factorization using warped Gaussian processes. In Proc. of the 26th International Conference on Machine Learning, pages 921–928. Omnipress, 2009.
[6] Y. Zhang and D.-Y. Yeung. Multi-task warped Gaussian process for personalized age estimation. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 2622–2629, 2010.
[7] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In Advances in Neural Information Processing Systems 8. MIT Press, 1996.
[8] W. Chu and Z. Ghahramani. Gaussian processes for ordinal regression. Journal of Machine Learning Research, 6:1019–1041, 2005.
[9] M. K. Titsias and N. D. Lawrence. Bayesian Gaussian process latent variable model. In Proc. of the 13th International Workshop on Artificial Intelligence and Statistics, volume 9 of JMLR: W&CP, pages 844–851, 2010.
[10] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In Proc. of the 12th International Workshop on Artificial Intelligence and Statistics, 2009.
[11] M. Lázaro-Gredilla and M. Titsias. Variational heteroscedastic Gaussian process regression. In 28th International Conference on Machine Learning (ICML-11), pages 841–848, New York, NY, USA, June 2011. ACM.
[12] M. Opper and C. Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786–792, 2009.
[13] A. C. Damianou, M. K. Titsias, and N. D. Lawrence. Variational Gaussian process dynamical systems. In Advances in Neural Information Processing Systems 24, 2011.
[14] A. Frank and A. Asuncion. UCI machine learning repository, 2010. http://archive.ics.uci.edu/ml. University of California, Irvine, School of Information and Computer Sciences.
[15] Materials algorithms project (MAP) program and data library. http://www.msm.cam.ac.uk/map/map.html.
[16] D. Cole, C. Martin-Moran, A. G. Sheard, H. K. D. H. Bhadeshia, and D. J. C. MacKay. Modelling creep rupture strength of ferritic steel welds. Science and Technology of Welding and Joining, 5:81–90, 2000.
[17] L. Torgo. http://www.liacc.up.pt/~ltorgo/Regression/.
[18] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287–320, 2001. http://people.tuebingen.mpg.de/vipin/www.fml.tuebingen.mpg.de/Members/raetsch/benchmark.1.html.
[19] A. Naish-Guzman and S. Holden. The generalized FITC approximation. In Advances in Neural Information Processing Systems 20, pages 1057–1064. MIT Press, 2008.
[20] R. B. Nelsen. An Introduction to Copulas. Springer, 1999.
[21] P. X.-K. Song. Multivariate dispersion models generated from Gaussian copula. Scandinavian Journal of Statistics, 27(2):305–320, 2000.
[22] A. Wilson and Z. Ghahramani. Copula processes. In Advances in Neural Information Processing Systems 23, pages 2460–2468. MIT Press, 2010.
[23] F. L. Wauthier and M. I. Jordan. Heavy-tailed process priors for selective shrinkage. In Advances in Neural Information Processing Systems 23. MIT Press, 2010.
Active Comparison of Prediction Models
Christoph Sawade, Niels Landwehr, and Tobias Scheffer
University of Potsdam
Department of Computer Science
August-Bebel-Strasse 89, 14482 Potsdam, Germany
{sawade, landwehr, scheffer}@cs.uni-potsdam.de
Abstract
We address the problem of comparing the risks of two given predictive
models, for instance a baseline model and a challenger, as confidently as possible on a fixed labeling budget. This problem occurs whenever models cannot
be compared on held-out training data, possibly because the training data are unavailable or do not reflect the desired test distribution. In this case, new test instances have to be drawn and labeled at a cost. We devise an active comparison
method that selects instances according to an instrumental sampling distribution.
We derive the sampling distribution that maximizes the power of a statistical test
applied to the observed empirical risks, and thereby minimizes the likelihood of
choosing the inferior model. Empirically, we investigate model selection problems on several classification and regression tasks and study the accuracy of the
resulting p-values.
1 Introduction
We address situations in which an informed choice between candidate predictive models, for instance a baseline method and a challenger, has to be made. In practice, it is not always possible to compare the models' risks on held-out training data. For example, in computer vision it is common
to acquire pre-trained object or face recognizers from third parties. Such recognizers do not typically come with the image databases that have been used to train them. The suppliers of the models
could provide risk estimates based on held-out training data; however, such estimates might be biased because the training data would not necessarily reflect the distribution of images the deployed
models will be exposed to. Another example are domains where the input distribution changes
over a period of time in which a baseline model, e.g., a spam filter, has been employed. By the
time a new predictive model is considered, a previous risk estimate of the baseline model may no
longer be accurate.
In these example scenarios, new test data have to be drawn and labeled. The standard approach
to comparing models would be to draw n test instances according to the test distribution which
the model is exposed to in practice, label these data, and calculate the difference of the empirical
risks $\hat{\Delta}_n$ and the sample variance $S_n^2$. Then, under the null hypothesis of identical risks, $\sqrt{n}\,\hat{\Delta}_n / S_n$ is
asymptotically governed by a standard normal distribution, and we can compute a p-value which
quantifies the likelihood that an observed empirical difference is due to chance, indicating how
confidently the decision to prefer the apparently better model can be made.
In many application scenarios, unlabeled test instances are readily available whereas the process
of labeling data is costly. We study an active model comparison process that, in analogy to active
learning, selects instances from a pool of unlabeled test data and queries their labels. Instances
are selected according to an instrumental sampling distribution q. The empirical difference of the
models' risks is weighted appropriately to compensate for the discrepancy between instrumental and test distributions, which leads to consistent, that is, asymptotically unbiased, risk estimates.
The principal theoretical contribution of this paper is the derivation of a sampling distribution q that
allows us to make the decision to prefer the superior model as confidently as possible given a fixed
labeling budget n, if one of the models is in fact superior. Equivalently, one may use q to minimize
the labeling costs n required to reach a correct decision at a prescribed level of confidence.
The active comparison problem that we study can be seen as an extreme case of active learning, in
which the model space contains only two (or, more generally, a small number of) models. For the
special case of classification with zero-one loss and two models under study, a simplified version
of the sampling distribution we derive coincides with the sampling distribution used in the A² and IWAL active learning algorithms proposed by Balcan et al. [1] and Beygelzimer et al. [2]. For A² and IWAL, the derivation of this distribution is based on finite-sample complexity bounds, while in
our approach, it is based on maximizing the power of a statistical test comparing the models under
study. The latter approach has the advantage that it directly generalizes to regression problems. A
further difference to active learning is that our goal is not only to choose the best model, but also to
obtain a well-calibrated p-value indicating the confidence with which this decision can be made.
Our method is also related to recent work on active data acquisition strategies for the evaluation
of a single predictive model, in terms of standard risks [8] or generalized risks that subsume precision, recall, and f-measure [9]. The problem addressed in this paper is different in that we seek
to assess the relative performance of two models, without necessarily determining absolute risks
precisely. Madani et al. have studied active model selection, where the goal is also to identify a
model with lowest risk [5]. However, in their setting costs are associated with obtaining predictions
ŷ = f(x), while in our setting costs are associated with obtaining labels y ∼ p(y|x). Hoeffding
races [6] and sequential sampling algorithms [10] perform efficient model selection by keeping
track of risk bounds for candidate models and removing models that are clearly outperformed from
consideration. The goal of these methods is to reduce computational complexity, not labeling effort.
The rest of this paper is organized as follows. The problem setting is laid out in Section 2. Section 3
derives the instrumental distribution and details our theoretical findings. Section 4 explores active
model comparison experimentally. Section 5 concludes.
2 Problem Setting
Let X denote the feature space and Y the label space; an unknown test distribution p(x, y) is defined
over X × Y. Let p(y|x; θ₁) and p(y|x; θ₂) be given θ-parameterized models of p(y|x) and let f_j : X → Y with $f_j(x) = \arg\max_y p(y|x; \theta_j)$ be the corresponding predictive functions.
The risks of f₁, f₂ are given by
$$R[f_j] = \iint \ell(f_j(x), y)\, p(x, y)\, dy\, dx \qquad (1)$$
for a loss function ℓ : Y × Y → ℝ. In a classification setting, the integral over Y reduces to a sum.
The standard approach to comparing models is to compare empirical risk estimates
$$\hat{R}_n[f_j] = \frac{1}{n} \sum_{i=1}^{n} \ell(f_j(x_i), y_i), \qquad (2)$$
where n test instances (x_i, y_i) are drawn from p(x, y) = p(x)p(y|x). We assume that unlabeled
data are readily available, but acquiring labels y for selected instances x according to p(y|x) is a
costly process that may involve a query to a human labeler.
Test instances need not necessarily be drawn according to the input distribution p(x). We will focus
on a data labeling process that draws test instances according to an instrumental distribution q(x)
rather than p(x). Intuitively, q(x) should be designed such as to prefer instances that highlight
differences between the models f1 and f2 . Let q(x) denote an instrumental distribution with the
property that p(x) > 0 implies q(x) > 0 for all x ∈ X. A consistent risk estimate is then given by
$$\hat{R}_{n,q}[f_j] = \frac{1}{W} \sum_{i=1}^{n} \frac{p(x_i)}{q(x_i)}\, \ell(f_j(x_i), y_i), \qquad (3)$$
where $(x_i, y_i) \sim q(x)\,p(y|x)$ and $W = \sum_{i=1}^{n} p(x_i)/q(x_i)$. The weighting factors $p(x_i)/q(x_i)$ compensate for the discrepancy between test and instrumental distribution, and the normalizer W is the sum of weights. Because of the weighting factors, Equation 3 defines a consistent risk estimate (see [4], Chapter 2).
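A minimal sketch of the weighted estimator in Equation 3 (function and variable names are ours):

```python
import numpy as np

def weighted_risk(losses, p_x, q_x):
    # Equation 3: instances are drawn x_i ~ q; the weights w_i = p(x_i)/q(x_i)
    # correct for the mismatch, and W = sum_i w_i normalises the estimate.
    w = p_x / q_x
    return np.sum(w * losses) / np.sum(w)
```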
Consistency means that the expected value of $\hat{R}_{n,q}[f_j]$ converges to the true risk $R[f_j]$ for n → ∞.
Given estimates $\hat{R}_{n,q}[f_1]$ and $\hat{R}_{n,q}[f_2]$, the difference $\hat{\Delta}_{n,q} = \hat{R}_{n,q}[f_1] - \hat{R}_{n,q}[f_2]$ provides evidence on which model is preferable; a positive $\hat{\Delta}_{n,q}$ argues in favor of f₂. In preferring one model over the other, one rejects the null hypothesis that the observed difference $\hat{\Delta}_{n,q}$ is only a random effect, and R[f₁] = R[f₂] holds. The null hypothesis implies that the mean of $\hat{\Delta}_{n,q}$ is asymptotically zero. Because $\hat{\Delta}_{n,q}$ is asymptotically normally distributed (see, e.g., [3]), it further implies that the statistic
$$\sqrt{n}\, \frac{\hat{\Delta}_{n,q}}{\sigma_{n,q}} \sim \mathcal{N}(0, 1)$$
is asymptotically standard-normally distributed, where $\frac{1}{n}\sigma_{n,q}^2 = \mathrm{Var}[\hat{\Delta}_{n,q}]$ denotes the variance of $\hat{\Delta}_{n,q}$. In practice, $\sigma_{n,q}^2$ is unknown. A consistent estimator of $\sigma_{n,q}^2$ is given by
$$S_{n,q}^2 = \frac{1}{W} \sum_{i=1}^{n} \frac{p(x_i)^2}{q(x_i)^2} \left( \ell(f_1(x_i), y_i) - \ell(f_2(x_i), y_i) - \hat{\Delta}_{n,q} \right)^2, \qquad (4)$$
as shown, for example, by Geweke [3]. Substituting the empirical for the true standard deviation yields an observable statistic $\sqrt{n}\,\hat{\Delta}_{n,q}/S_{n,q}$. Because $S_{n,q}^2$ consistently estimates $\sigma_{n,q}^2$, the null hypothesis also implies that the observable statistic is asymptotically standard normally distributed,
$$\sqrt{n}\, \frac{\hat{\Delta}_{n,q}}{S_{n,q}} \sim \mathcal{N}(0, 1).$$
Let Φ denote the cumulative distribution function of the standard normal distribution. Then,
$$2\left(1 - \Phi\!\left(\sqrt{n}\, \frac{|\hat{\Delta}_{n,q}|}{S_{n,q}}\right)\right) \qquad (5)$$
is called the p-value of a two-sided paired Wald test (see, e.g., [12], Chapter 10). The p-value quantifies the likelihood of observing the given absolute value of the test statistic, or a higher value, by chance under the null hypothesis. Student's t-distribution can serve as a more popular approximation of the distribution of a test statistic under the null hypothesis, resulting in the common t-test. Note, however, that $S_{n,q}$ would have to be a sum of squared, normally distributed random variables for the test statistic to be asymptotically governed by the t-distribution. This assumption is reasonable for regression, but not for classification, and only for the case of p = q.
test statistic depends on the chosen sampling distribution q(x). Our goal is to find a distribution q(x)
that allows us to tell the risks of f1 and f2 apart with high confidence. More formally, the power of
a test when sampling from q(x) is the likelihood that the null hypothesis can be rejected, that is, the
likelihood that the p-value falls below a pre-specified confidence threshold ?. Our goal is to find the
sampling distribution q that maximizes test power:
!!
!
? n,q |
? |?
?
q = arg max p 2 1 ? ?
n
?? .
(6)
Sn,q
q
3
Active Model Comparison
We now turn towards deriving an optimal sampling distribution q ? according to Equation 6. Section 3.1 analytically derives an asymptotically optimal sampling distribution. Section 3.2 discusses
the sampling distribution in a pool-based setting and presents the active comparison algorithm.
3
3.1
Asymptotically Optimal Sampling
Let ? = R[f1 ] ? R[f2 ] denote the true risk difference, and assume ? 6= 0. Given a confidence
threshold ?, the test power equals the probability that
the absolute value of the test statistic exceeds
the corresponding critical value z? = ??1 1 ? ?2 :
!
!
!
? n,q |
? n,q |
? |?
? |?
n
n
p 2 ? 2?
?? =p
? z? .
(7)
Sn,q
Sn,q
Asymptotically, it holds that
?
? n,q ? ?)
n(?
? N (0, 1).
?n,q
Since Sn,q consistently estimates ?n,q , it follows that for large n the statistic
?
distributed with mean
n?
?n,q
?
?
?
n,q
is normally
n Sn,q
and unit variance,
?
? ?
n?n,q
n?
?N
,1 .
Sn,q
?n,q
Equation 8 implies that the absolute value
?
?
n
? n,q |
|?
Sn,q
(8)
of the test statistic follows a folded normal dis-
n?
tribution with location parameter ?n,q and scale parameter one. According to Equation 7, test power
can thus be approximated in terms of the cumulative distribution of this folded normal distribution,
!
!
Z z? ?
? n,q |
? |?
n?
p 2 ? 2?
n
?? ?1?
f T;
, 1 dT,
(9)
Sn,q
?n,q
0
where
1
1
1
1
2
2
?
?
f (T ; ?, 1) =
exp ? (T + ?) +
exp ? (T ? ?)
2
2
2?
2?
denotes the density of a folded normal distribution with location parameter ? and scale parameter
one. We define the shorthand
Z z? ?
n?
?n,q = 1 ?
f T;
, 1 dT
?n,q
0
for the approximation of test power given by Equation 9. In the following, we derive a sampling distribution maximizing ?n,q , thereby approximately solving the optimization problem of Equation 6.
Theorem 1 (Optimal Sampling Distribution). Let ? = R[f1 ] ? R[f2 ] with ? 6= 0. The distribution
sZ
q ? (x) ? p(x)
2
(`(f1 (x), y) ? `(f2 (x), y) ? ?) p(y|x)dy
asymptotically maximizes ?n,q ; that is, for any other sampling distribution q 6= q ? it holds that
?n,q < ?n,q? for sufficiently large n.
Before we prove Theorem 1, we show that a sampling distribution asymptotically maximizes ?n,q if
? n,q .
and only if it minimizes the asymptotic variance of the estimator ?
Lemma 2 (Variance Optimality). Let q, q 0 denote two sampling distributions. Then it holds that
?n,q > ?n,q0 for sufficiently large n if and only if
h
i
h
i
? n,q < lim n Var ?
? n,q0 .
lim n Var ?
(10)
n??
n??
A proof is included in the online appendix. Lemma 2 shows that in order to solve the optimization
problem given by Equation 6, we need to find the sampling distribution minimizing the asymptotic
? n,q . This asymptotic variance is characterized by the following Lemma.
variance of the estimator ?
4
? n,q ] of ?
? n,q is
Lemma 3 (Asymptotic Variance). The asymptotic variance ?q2 = lim n Var[?
n??
given by
ZZ
p(x)2
2
?q2 =
(`(f1 (x), y) ? `(f2 (x), y) ? ?) p(y|x)q(x)dy dx.
q(x)2
A proof of Lemma 3 is included in the online appendix.
Proof of Theorem 1. We can now prove Theorem 1 by deriving the distribution q ? that minimizes the
asymptotic variance
?q2 as given by Lemma 3. We minimize the functional ?q2 in terms of q under
R
the constraint q(x)dx = 1 using a Lagrange multiplier ?.
Z
Z
c(x)
L [q, ?] = ?q2 + ?
q(x)dx ? 1 =
+ ? (q(x) ? p(x)) dx
q(x)
R
2
where c(x) = p(x)2 (`(f1 (x), y) ? `(f2 (x), y) ? ?) p(y|x)dy. The optimal point for the constrained problem satisfies the Euler-Lagrange equation
?
c(x)
c(x)
+ ? = 0.
(11)
+ ? (q(x) ? p(x)) = ?
?q(x) q(x)
q(x)2
A solution for Equation 11 with respect to the normalization constraint is given by
p
c(x)
?
q (x) = R p
.
(12)
c(x)dx
Resubstitution of c(x) into Equation 12 implies the theorem.
3.2
Empirical Sampling Distribution
The distribution q ? also depends on the true conditional p(y|x) and the true difference in risks ?.
In order to implement the method, we have to approximate these quantities. Note that as long as
p(x) > 0 implies q(x) > 0, any choice of q will yield consistent risk estimates because weighting
factors account for the discrepancy between sampling and test distribution (Equation 3). That is,
? n,q is guaranteed to converge to ? as n grows large; any approximation employed to compute q ?
?
will only affect the number of test examples required to reach a certain level of estimation accuracy. To approximate the true conditional p(y|x), we use the given predictive models p(y|x; ?1 ) and
p(y|x; ?2 ), and assume a mixture distribution giving equal weight to both models:
1
1
(13)
p(y|x) ? p(y|x; ?1 ) + p(y|x; ?2 ).
2
2
The risk difference ? is replaced by a difference ?? of introspective risks calculated from Equa1
tion 1, where the integral over X is replaced by a sum over the pool, p(x) = m
, and p(y|x) is
approximated by Equation 13.
We will now derive the empirical sampling distribution for two standard loss functions.
Derivation 4 (Sampling for Zero-one Loss). Let ` be the zero-one loss for a binary prediction
problem with label space Y = {0, 1}. When p(y|x) is approximated as in Equation 13, the sampling
distribution asymptotically maximizing ?n,q in a pool-based setting resolves to
?
|?? |
: f1 (x) = f2 (x)
?
?
?q
2
q ? (x) ? q1 ? 2?? (1 ? 2p(y = 1|x; ?)) + ?? : f1 (x) > f2 (x)
?
?
? 1 + 2? (1 ? 2p(y = 1|x; ?)) + ? 2 : f (x) < f (x)
?
?
1
2
for all x ? D.
A proof is included in the online appendix. Instead of using Approximation 13, an uninformative approximation p(y = 1|x) ≈ 0.5 may be used. In this case q* degenerates to uniform sampling from the subset of the pool where f₁(x) ≠ f₂(x). We denote this baseline as active≠. This baseline coincides with the A² as well as the IWAL active learning algorithms, applied to the model space {f₁, f₂}, as can be seen from inspection of Algorithm 1 in [1] and Algorithms 1 and 2 in [2].
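As a concrete illustration of Derivation 4 over a pool of instances (a sketch; the array names are ours):

```python
import numpy as np

def q_star_zero_one(f1, f2, p1, delta_bar):
    # Derivation 4, up to normalisation. f1, f2: predicted labels in {0,1};
    # p1: p(y=1|x; theta) from Equation 13; delta_bar: introspective risk gap.
    s = 1.0 - 2.0 * p1
    q = np.where(f1 == f2, np.abs(delta_bar),
                 np.where(f1 > f2,
                          np.sqrt(1.0 - 2.0 * delta_bar * s + delta_bar ** 2),
                          np.sqrt(1.0 + 2.0 * delta_bar * s + delta_bar ** 2)))
    return q / q.sum()
```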
We now derive the optimal sampling distribution for regression problems with a squared loss function, assuming that the predictive distributions p(y|x; θ₁) and p(y|x; θ₂) are Gaussian:
Algorithm 1 Active Model Comparison
input: Models f₁, f₂ with distributions p(y|x; θ₁), p(y|x; θ₂); pool D; labeling budget n.
1: Compute sampling distribution q* (Derivation 4 or 5).
2: for i = 1, ..., n do
3:   Draw xᵢ ∼ q*(x) from D with replacement.
4:   Query label yᵢ ∼ p(y|xᵢ) from oracle.
5: end for
6: Compute $\hat{R}_{n,q}[f_1]$ and $\hat{R}_{n,q}[f_2]$ (Equation 3).
7: Determine $f^* \leftarrow \arg\min_{f \in \{f_1, f_2\}} \hat{R}_{n,q}[f]$; compute p-value for sample (Equation 5).
output: f*, p-value.
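A compact sketch of Algorithm 1 (our own code, reusing the paired_wald_test helper sketched above; the oracle is a callable standing in for a human labeler):

```python
import numpy as np

def active_comparison(pool_x, q_star, oracle, loss_f1, loss_f2, n, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pool_x), size=n, replace=True, p=q_star)  # x_i ~ q*
    y = np.array([oracle(pool_x[i]) for i in idx])                 # query labels
    l1, l2 = loss_f1(pool_x[idx], y), loss_f2(pool_x[idx], y)
    p_x = np.full(n, 1.0 / len(pool_x))      # uniform pool distribution p(x)
    delta, p_value = paired_wald_test(l1, l2, p_x, q_star[idx])
    f_star = "f1" if delta < 0 else "f2"     # lower empirical risk wins
    return f_star, p_value
```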
Derivation 5 (Sampling for Squared Loss). Let ℓ be the squared loss, and let p(y|x; θ₁) and p(y|x; θ₂) be Gaussian. When p(y|x) is approximated as in Equation 13, then the sampling distribution asymptotically maximizing $\alpha_{n,q}$ in a pool-based setting resolves to
$$q^*(x) \propto \sqrt{ 2\left(f_1(x) - f_2(x)\right)^2 \left(f_1^2(x) + f_2^2(x) + \sigma_x^2\right) - \left(f_1^2(x) - f_2^2(x)\right)^2 } \qquad (14)$$
for all x ∈ D, where $\sigma_x^2$ denotes the sum of the variances of the predictive distributions at x ∈ D.
A proof is given in the online appendix. Variances of predictive distributions at instance x would be available from a probabilistic model such as a Gaussian process [7]. If only predictions f_j(x) but no predictive distribution is available, we can assume peaked distributions with σ_x² → 0, leading to
$$q^*(x) \propto (f_1(x) - f_2(x))^2,$$
or we can assume infinitely broad predictive distributions with σ_x² → ∞, leading to
$$q^*(x) \propto |f_1(x) - f_2(x)|.$$
We refer to these baselines as active₀ and active∞.
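Equation 14 and its two limiting baselines can be evaluated over the pool as follows (a sketch; names are ours):

```python
import numpy as np

def q_star_squared(f1, f2, var_sum):
    # Equation 14 up to normalisation; var_sum is sigma_x^2, the summed
    # predictive variance at each pool instance. var_sum -> 0 recovers
    # active_0 and var_sum -> infinity recovers active_inf (up to scaling).
    d2 = (f1 - f2) ** 2
    inner = 2.0 * d2 * (f1 ** 2 + f2 ** 2 + var_sum) - (f1 ** 2 - f2 ** 2) ** 2
    q = np.sqrt(np.maximum(inner, 0.0))  # inner is analytically nonnegative
    return q / q.sum()
```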
Algorithm 1 summarizes the active model comparison algorithm. It samples n instances with replacement from the pool according to the distribution prescribed by Derivations 4 (for zero-one
loss) or 5 (for squared loss) and queries their label. Note that instances can be drawn more than
once; in the special case that the labeling process is deterministic, the actual labeling costs may thus
stay below the sample size. In this case, the loop is continued until the labeling budget is exhausted.
We have so far focused on the problem of comparing the risks of two prediction models, such as
a baseline and a challenger. We might also consider several alternative models; the objective of
an evaluation could be to rank the models according to the risk incurred or to identify the model
with lowest risk. Standard generalizations of the Wald test that compare multiple alternatives, for instance within-subject ANOVA [11], try to reject the null hypothesis that the means of all considered alternatives are equal. Rejection does not imply that all empirically observed differences are
significant; for instance, the test could become significant because one of the alternatives performs
clearly worst. Choosing a sampling distribution q that maximizes the power of such a test would
thus in general not reflect the objectives of the empirical evaluation.
In practice, researchers often resort to pairwise hypothesis testing when comparing multiple prediction models. Accordingly, we derive a heuristic sampling distribution for the comparison of multiple
models θ₁, ..., θ_k as a mixture of pairwise-optimal sampling distributions,
$$q^*(x) = \frac{1}{k(k-1)} \sum_{i \ne j} q^*_{i,j}(x), \qquad (15)$$
where $q^*_{i,j}$ denotes the optimal distribution for comparing the models θᵢ and θⱼ given by Theorem 1. When comparing multiple models, we replace Equation 13 by a mixture over all models θ₁, ..., θ_k.
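Given the k(k−1) pairwise distributions, the mixture of Equation 15 is simply their average (a sketch; names are ours):

```python
import numpy as np

def q_star_multi(pairwise_q):
    # Equation 15: pairwise_q has one row per ordered pair (i, j), i != j,
    # each row a normalised distribution over the pool.
    return pairwise_q.mean(axis=0)
```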
4 Empirical Results
We study the empirical behavior of active comparison (Algorithm 1, labeled active in all diagrams)
relative to a risk comparison based on a test sample drawn uniformly from the pool (labeled passive)
[Figure 1 panels (plots omitted): Spam Filtering (classification, 2 models), Abalone (regression, 2 models), and Inverse Dynamics (regression, 2 models) in the top row; Object Recognition (classification, 13 models), Abalone (regression, 5 models), and Inverse Dynamics (regression, 5 models) in the bottom row. Each panel plots model selection accuracy over labeling costs n for active, active∞, active₀, active≠, passive, ARE, A², and IWAL.]
Figure 1: Model selection accuracy over labeling costs for comparison of two prediction models
(top) and multiple prediction models (bottom). Error bars indicate the standard error.
and the baselines active≠, active₀, and active∞ discussed in Section 3.2. We also include the active risk estimator presented in [8] in our study, which infers optimal sampling distributions q₁* and q₂* for individually estimating the risks of the models θ₁ and θ₂. Test instances are sampled from a mixture distribution q*(x) = ½ q₁*(x) + ½ q₂*(x) (labeled ARE). Each comparison method returns the model with lower empirical risk and the p-value of a paired two-sided test. When studying classification, we also include the active learning algorithms A² [1] and IWAL [2] as baselines by using them to
sample test instances. Their model space is the set of predictive models that are to be compared.
We conduct experiments in two classification domains (spam filtering, object recognition) and two
regression domains (inverse dynamics, Abalone) ranging from 4,109 to 169,612 instances. Kernelized logistic regression is employed for classification; Gaussian processes are employed for regression. In the spam filtering domain, we compare models that differ in the recency of their training
data. In the object recognition domain, we compare SIFT-based recognizers using different interest
point detectors (Harris operator, Canny edge detector, Förstner operator) and visual vocabularies.
For regression, we compare models that differ in the choice of their kernel function (linear versus
Matern, polynomial kernels of different degrees). Models are trained on part of the available data;
the rest of the data serve as the pool of unlabeled test instances for which labels can be queried.
Results are averaged over 5,000 repetitions of the evaluation process. Further details on the datasets
and experimental setup are included in the online appendix.
4.1 Identifying the Model With Lower True Risk
We measure model selection accuracy, defined as the fraction of experiments in which an evaluation
method correctly identifies the model with lower true risk. The true risk is taken to be the risk over
all test instances in the pool. Figure 1 (top) shows that for the comparison of two models active
results in significantly higher model selection accuracy than passive, or, equivalently, saves between
70% and 90% of labeling effort. Differences between active and the simplified variants active0 ,
active? , and active6= are marginal. These variants do not require an estimate of p(y|x), thus the
method is applicable even if no such estimate is available. A 2 and IWAL coincide with active6=
(cf. Section 3.2). Figure 1 (bottom) shows results when comparing multiple models. In the object
recognition domain, active saves approximately 70% of labeling effort compared to passive. A 2 and
IWAL outperform passive but are less accurate than active. For the regression domains, active saves
between 60% and 85% of labeling effort compared to passive.
4.2
Significance Testing: Type I and Type II Errors
We now study how often a comparison method is able to reject the null hypothesis that two predictive
models incur identical risks, and the calibration of the resulting p-values. For classification, the
7
0.8
0.4
0.6
0.4
0.2
0.2
0
0
0.001
0.01
0.05
??level
0.1
average p?value
0.6
Average p?value
Inverse Dynamics (Regression)
0.3
passive
active
0.001
0.01
0.05
??level
0.2
0.1
0
0.1
passive
active
200
400
600
labeling costs n
Average p?value
Abalone (Regression)
average p?value
True Positive Significance
Abalone (Regression, n=800)
1
frequency
frequency
True Positive Significance
Inverse Dynamics (Regression, n=800)
1
passive
active
0.8
0.2
0.1
0
800
passive
active
0.3
200
400
600
labeling costs n
800
0.05
0
0
0.15
False Positive Significance
Inverse Dynamics (Regression, ?=0.05)
passive
active
0.1
0.05
0.05
0.1
??level
0.15
0.2
0
0
False Positive Significance
Abalone (Regression, ?=0.05)
0.05
0.1
??level
0.15
0.04
0.02
0
0.2
frequency
0.1
False Positive Significance
Abalone (Regression, n=800)
0.2
frequency
False Positive Significance
Inverse Dynamics (Regression, n=800)
0.2
passive
active
0.15
frequency
frequency
Figure 2: True-positive significance rate for different test levels ? (left, left-center). Average p-value
over labeling costs n (right-center, right). Error bars indicate the standard error.
passive
active
200
400
600
800
labeling costs n
0.04
0.02
0
passive
active
200
400
600
800
labeling costs n
Figure 3: False-positive significance rate over test level ? (left, left-center). False-positive significance rate over labeling costs n (right-center, right). Error bars indicate the standard error.
method active≠ is equivalent to passive applied to D≠ = {x ∈ D | f₁(x) ≠ f₂(x)} (see Section 3.2). Labeling effort is thus simply reduced by a factor of |D≠|/|D|. For regression, the analysis is less straightforward as typically D = D≠. In this section we therefore focus on regression problems.
Figure 2 (left, left-center) shows how often the active and passive comparison methods are able to
reject the null hypothesis that the two models incur identical risk. The true risks incurred are never
equal in these experiments. We observe that active is able to reject the null hypothesis more often
and with a higher confidence. In the Abalone domain, active rejects the null hypothesis at α = 0.001 more often than passive is able to reject it at α = 0.1. Figure 2 (right-center, right) shows that active
comparison also results in lower average p-values, in particular for large n.
We also conduct experiments under the null hypothesis. Whenever a test instance x is sampled and the predictions y = f₁(x) and y′ = f₂(x) are queried, the predicted labels y and y′ are swapped with probability 0.5; this ensures that the true risks of f₁ and f₂ coincide. Figure 3 (left, left-center)
shows that Type I errors are well calibrated for both tests, as the false-positive rate stays below the
(ideal) diagonal line when plotted against α. Figure 3 (right-center, right) shows that both tests are
slightly conservative for small n, and approach the expected false-positive rate as n grows larger.
We finally study a protocol in which test instances are drawn and labeled until the null hypothesis can
be rejected or the labeling budget is exhausted. Results (included in the online appendix) indicate
that active incurs the lowest average labeling costs, obtains significance results most often, and has
the lowest likelihood of incorrectly choosing the model with higher true risk.
5 Conclusion
We have derived the sampling distribution that asymptotically maximizes the power of a statistical
test that compares the risk of two predictive models. The sampling distribution intuitively gives
preference to test instances on which the models disagree strongly.
Empirically, we observed that the resulting active comparison method consistently outperforms a
traditional comparison based on a uniform sample of test instances. Active comparison identifies
the model with lower true risk more often, and is able to detect significant differences between
the risks of two given models more quickly. In the four experimental domains that we studied,
performing active comparison resulted in a saved labeling effort of between 60% and over 90%. We
also performed experiments under the null hypothesis that both models incur identical risks, and
verified that active comparison does not lead to increased false-positive significance results.
Acknowledgements
We wish to thank Paul Prasse for his help with the experiments on object recognition data.
8
References
[1] M. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In Proceedings of the
23rd International Conference on Machine Learning, 2006.
[2] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[3] J. Geweke. Bayesian inference in econometric models using Monte Carlo integration. Econometrica, 57(6):1317–1339, 1989.
[4] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer, 2001.
[5] O. Madani, D. J. Lizotte, and R. Greiner. Active model selection. In Proceedings of the 20th
Conference on Uncertainty in Artificial Intelligence, 2004.
[6] O. Maron and A. W. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In Proceedings of the 6th Annual Conference on Neural
Information Processing Systems, 1993.
[7] Carl Edward Rasmussen and Christopher Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[8] C. Sawade, N. Landwehr, S. Bickel, and T. Scheffer. Active risk estimation. In Proceedings of
the 27th International Conference on Machine Learning, 2010.
[9] C. Sawade, N. Landwehr, and T. Scheffer. Active estimation of f-measures. In Proceedings of
the 23rd Annual Conference on Neural Information Processing Systems, 2010.
[10] T. Scheffer and S. Wrobel. Finding the most interesting patterns in a database quickly by using
sequential sampling. Journal of Machine Learning Research, 3:833–862, 2003.
[11] D. Sheskin. Handbook of Parametric and Nonparametric Statistical Procedures. Chapman &
Hall, 2004.
[12] L. Wasserman. All of Statistics: a Concise Course in Statistical Inference. Springer, 2004.
Learning Multiple Tasks using Shared Hypotheses
Koby Crammer
Department of Electrical Engineering
The Technion - Israel Institute of Technology
Haifa, 32000 Israel
[email protected]
Yishay Mansour
School of Computer Science
Tel Aviv University
Tel - Aviv 69978
[email protected]
Abstract
In this work we consider a setting where we have a very large number of related
tasks with few examples from each individual task. Rather than either learning
each task individually (and having a large generalization error) or learning all the
tasks together using a single hypothesis (and suffering a potentially large inherent
error), we consider learning a small pool of shared hypotheses. Each task is then
mapped to a single hypothesis in the pool (hard association). We derive VC dimension generalization bounds for our model, based on the number of tasks, the number of shared hypotheses, and the VC dimension of the hypotheses class. We conducted experiments with both synthetic problems and sentiment of reviews, which strongly
support our approach.
1 Introduction
Consider a sentiment analysis task for a set of reviews of different products. Each individual product has only very few reviews, which does not enable reliable learning. Furthermore, reviewers may use a different amount and level of superlatives to describe the same sentiment level, or feel a different sentiment level yet describe the product with the same text. For example, one may use the sentence 'The product is OK' to describe the highest satisfaction, while another would use 'It's a great product, but not amazing' to describe some notion of disappointment. Should one build individual sentiment predictors, one per product, based on a small amount of data, or build a single sentiment predictor for all products, based on mixed input with potentially heterogeneous linguistic usage?
One methodology is to cluster individual products to categories, and run the learning algorithm
on the aggregated data. While in some cases the aggregation might be simple, in other cases it
might be a challenge. (For example, you can cluster restaurants by the cuisine, by the price, by the
location, etc.) In addition, the different tasks might be somewhat different on both domain (text
used) or predictions (sentiment association with given text), which may raise the dilemma between
clustering related tasks or related domain.
In this work we propose an alternative methodology. Rather than clustering the different tasks before
the learning, we perform the clustering as part of the learning task. Specifically, we consider a very large number
of tasks, with only a few examples from each domain. The goal is to output a pool of a few classifiers,
and to map each task to a single classifier (or a convex combination of them). The idea is that we can
control the complexity of the learning process by deciding on the size of the pool of shared classifiers.
This is a very natural approach in such a setting.
Our first objective is to study the generalization bounds for such a simple and natural setting. We
start by computing upper and lower bounds on the VC dimension, showing that the VC dimension
is at most O(T log k + kd log(T kd)), where T is the number of domains, k the number of shared
hypotheses, and d the VC dimension of the basic hypothesis class. We also show a lower bound
of max{kd, T min{d, log k}}. This shows that the dependency on the number of tasks (T) and
the number of shared hypotheses (k) is very different; namely, increasing the number of shared
hypotheses increases the VC dimension only logarithmically.
This will imply that if we have N examples per task, the generalization error is only
O(√(log k / N) + √(dk / (T N))), compared to O(√(d / N)) when learning each task individually.
So we have a significant gain when log k ≪ d and k ≪ T, which is a realistic case. We also
derived a K-means-like algorithm to learn such classifiers, finding both the models and the
association of tasks to models.
Our experimental results support the general theoretical framework introduced. We conduct experiments with both synthetic problems and sentiment prediction, with the number of tasks ranging between
30 and 370, some with as few as 18 examples in the training set. Our experimental results strongly
support the benefits of the approach we propose here, which attains lower test error compared
with learning individual models per task, or a single model for all tasks.
Related Work
In recent years there has been an increasing body of work on domain adaptation and multi-task learning. In
domain adaptation we often assume that the tasks to be performed are very similar to each other, yet
the data comes from different distributions, and often there is only unlabeled data from the domain
(or task) of interest. Mansour et al. [18] developed theory for when the distribution of the problem of interest
(called the target) is a convex combination of other distributions, for which samples from each are given.
Ben-David et al. [6] focused on classification and developed a distance between distributions and
used it to develop new generalization bounds when training and test examples are not coming from
the same distribution. Mansour et al. [19] built on that work and developed a new distance and theory
for adaptation problems with arbitrary loss functions. See also a recent result of Blanchard et al. [7].
Another direction of research is to learn a few problems simultaneously, yet, unlike in domain adaptation, assuming examples are coming from the same distribution. Obozinski et al. [20] proposed to
learn one model per task, yet find a small set of shared features using mixed-norm regularization.
Argyriou et al. [4] took a similar approach, yet with the added complexity that the feature space can
also be rotated before choosing this small shared set. Ando and Zhang [2], and Amit et al. [1], learn
by first finding a linear transformation shared by all tasks, and then individual models per task. The
first formulation is not convex, while the latter is. Evgeniou [13] and Daumé [15] proposed to combine two models, one individual per task and the other shared across all tasks, combining them
at test time, while later Evgeniou et al. [12] proposed to learn one model per task, and to force all the
models to be close to each other. Finally, there exists a large body of work on multi-task learning in
the Bayesian setting, where a shared prior is used to connect or relate the various tasks [5, 22, 16],
while other works [17, 21, 9] use Gaussian process predictors.
The work most similar to ours is that of Crammer et al. [11, 10], who developed theory for learning a
model with few datasets from various tasks, assuming they are sampled from the same source. They
assumed that the relative error (or a bound over it) is known, and proved a generalization bound for
that task; their bounds proposed to use some of the datasets, but not all, when building a model for
the main task. Yet, this was performed before seeing the data and under the strong assumption that the
discrepancy between tasks is known. We do not assume this knowledge and learn the tasks simultaneously.
2
Model
There is a set T of T tasks, and with each task t there is an associated distribution Dt over inputs
(x, y), where x ∈ R^r and y ∈ Y. We assume binary classification tasks, i.e., Y = {+1, −1}.
Each task t ∈ T has a sample of size nt denoted by St = {(xt,i, yt,i, t)}_{i=1}^{nt} drawn from Dt, where
xt,i ∈ R^r is the i-th example in the t-th domain and yt,i ∈ Y is the corresponding label. (Note
that the name of the domain is part of the example, so there is no uncertainty regarding which
domain the example originated from.)
A k-shared task classifier is a pair (Hk, g), where Hk = {h1, . . . , hk} ⊆ H is a set of k hypotheses
from a class of functions H = {h : R^r → Y}. The function g maps each task t ∈ T to the
hypothesis pool Hk, where the mapping is to a single hypothesis (hard association). We denote by
K = {1, . . . , k} the index set for Hk.
In the hard k-shared task classifier, g maps each task t ∈ T to one hypothesis hi ∈ Hk, i.e., g :
T → K. The classifier (Hk, g), given an example (x, t), first computes the mapping from the domain
name t to the hypothesis hi, where i = g(t), and then predicts using the corresponding function hi,
i.e., the prediction is hg(t)(x). The class of hard k-shared task classifiers using hypothesis class H
includes all such (Hk, g) classifiers, i.e., fHk,g : R^r × T → Y, where fHk,g(x, t) = hg(t)(x), and
the class is FH,k = {fHk,g : |Hk| = k, Hk ⊆ H, g : T → K}.
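To make the mapping concrete, here is a minimal Python sketch of a hard k-shared task classifier as a data structure (our own illustration; the class name, the toy hypotheses, and the example values are not from the paper):

import numpy as np

class HardKSharedClassifier:
    """A hard k-shared task classifier (Hk, g): a small pool of k
    hypotheses plus a map g from task ids to pool indices. The
    'hypotheses' here are any callables x -> {+1, -1}."""

    def __init__(self, hypotheses, g):
        self.hypotheses = hypotheses  # list of k callables (the pool Hk)
        self.g = g                    # dict: task id t -> index in {0, ..., k-1}

    def predict(self, x, t):
        # Route the example through the single hypothesis assigned to task t.
        return self.hypotheses[self.g[t]](x)

# Example: two linear hypotheses shared by three tasks.
h_pool = [lambda x: int(np.sign(x[1])), lambda x: int(-np.sign(x[1]))]
clf = HardKSharedClassifier(h_pool, g={0: 0, 1: 0, 2: 1})
print(clf.predict(np.array([0.3, -1.2]), t=2))  # task 2 uses the second hypothesis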
3
Hard k-shared Task Classifiers: Generalization Bounds
We envision the following learning process. Given the training sets St, for t ∈ T, the learner
outputs at the end of the training phase both Hk and g, where Hk is composed of k hypotheses
h1, . . . , hk ∈ H and g : T → K. Naturally, this implies that there is potentially overfitting in both
the selection of Hk and the mapping g.
Our main goal in this section is to bound the VC dimension of the resulting hypothesis class FH,k,
assuming the VC dimension of H is d. We show that the VC dimension of FH,k is at most
O(T log k + kd log(T kd)) and at least Ω(T log k + dk).
Theorem 1. For any hypothesis class H of VC-dimension d, the class of hard k-shared task classifiers FH,k has VC dimension at most the minimum between dT and O(T log k + kd log(T kd)).
Proof: Our main goal is to derive an upper bound on the number of possible labelings using hard
k-shared task classifiers FH,k. Once we establish this, we can use Sauer's lemma to derive an upper
bound on the VC dimension [3]. Let Φd(m) = Σ_{j=0}^{d} (m choose j) be an upper bound on the number of
labelings over m examples using a hypothesis class of VC dimension d. Let m = Σ_{t∈T} nt be the total
sample size.
We consider all mappings g of the T tasks to Hk; there are k^T such mappings. Fix a particular
mapping g where hypothesis hj has tasks S^j ⊆ T assigned to it. (At this point hj ∈ H is not
fixed yet; we are only fixing g and the tasks that are mapped to the j-th hypothesis in Hk.) There are
mj = Σ_{t∈S^j} nt examples for the tasks in S^j, and therefore at most Φd(mj) labelings. (Note that
the labelings may use any h ∈ H.) We can upper bound the number of labelings of any hypothesis
pool Hk by Π_{j=1}^{k} Φd(mj). Since m = Σ_j mj, this bound is maximized when mj = m/k, and this
implies that the number of labelings is upper bounded by k^T (em/dk)^{dk}.
Now we would like to upper bound the VC dimension of FH,k. When m is equal to the VC dimension, we have 2^m different labelings induced on the m points. Hence, it has to be the case that
2^m ≤ k^T (em/dk)^{dk}.
We need to find the largest m for which m ≤ kd log(em/dk) + T log k = T log k + kd log(e/dk) +
kd log m ≤ T log k + kd log m for dk ≥ 3. Note that for α ≥ 2 and β ≥ 1, we have that if
m < α + β log(m) then m < α + 16β log(αβ). This implies that
m ≤ T log k + 16kd log(T dk log k) = O(T log k + kd log(T kd)),
which derives an upper bound on the number of points that can be shattered, and hence the VC
dimension.
To show the upper bound of dT , we simply let each domain select a separate hypothesis from H.
Since H has VC dimension d, there are at most d examples that can be shattered in each task, for a
total of dT .
As an immediate corollary we can derive the following generalization bound, using the standard VC
dimension generalization bounds [3]. For simplicity we assume that the distribution over the tasks
is uniform; we define the true error as e(fHk,g) = Pr_{(x,y,t)}[fHk,g(x) ≠ y], and the empirical (or
training) error as
ê(fHk,g) = (1/m) Σ_{t=1}^{T} Σ_{i=1}^{nt} I[fHk,g(xt,i) ≠ yt,i],        (1)
where m = Σ_t nt is the sample size, and I(a) = 1 iff the predicate a is true. We can now state the
following corollary, which follows from standard generalization bounds.
Input parameters: k - number of models to use, N - number of iterations, α - fraction of data for the split
Initialize:
• Set a random partition St = St1 ∪ St2, where St1 ∩ St2 = ∅ and |St1|/|St| = α, for t = 1 . . . T
• Set g(t) = Jt where Jt is drawn uniformly from {1, . . . , k}
For i = 1, . . . , N:
1. Set hj ← learn(∪_{t∈Ij} St1, H) where Ij = {i : g(i) = j}.
2. Set g(t) = argmin_{j=1,...,k} (1/|St2|) Σ_{(x,y)∈St2} I[hj(x) ≠ y].
Set hj ← learn(∪_{t∈Ij} St, H) where Ij = {i : g(i) = j}.
Output: fHk,g(x) where Hk = {h1, . . . , hk}
Figure 1: The SHAMO algorithm for learning shared models.
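For concreteness, below is a minimal Python/numpy sketch of the procedure in Figure 1 (our own illustration; all function and variable names are ours, and a plain perceptron stands in for the base learner learn(S, H), whereas the experiments later in the paper use an averaged perceptron):

import numpy as np

def perceptron(X, y, iters=10):
    # Stand-in for learn(S, H): a plain perceptron; the paper's
    # experiments use an averaged perceptron, so this is a simplification.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
    return w

def shamo(tasks, k, n_iters=10, alpha=0.5, seed=0):
    # tasks: list of (X_t, y_t) pairs, one per task; labels y_t in {-1, +1}.
    rng = np.random.RandomState(seed)
    dim = tasks[0][0].shape[1]
    # Random split of each task: St1 to fit models, St2 to associate.
    splits = []
    for X, y in tasks:
        idx = rng.permutation(len(y))
        cut = max(1, int(alpha * len(y)))
        splits.append((idx[:cut], idx[cut:]))
    g = rng.randint(k, size=len(tasks))  # random initial association
    for _ in range(n_iters):
        # Stage 1: refit each model on the union of its assigned tasks.
        W = []
        for j in range(k):
            assigned = [t for t in range(len(tasks)) if g[t] == j]
            if not assigned:
                W.append(np.zeros(dim))
                continue
            Xj = np.vstack([tasks[t][0][splits[t][0]] for t in assigned])
            yj = np.concatenate([tasks[t][1][splits[t][0]] for t in assigned])
            W.append(perceptron(Xj, yj))
        # Stage 2: re-associate each task with its lowest-error model on St2.
        for t, (X, y) in enumerate(tasks):
            i2 = splits[t][1]
            errs = [np.mean(np.sign(X[i2] @ w) != y[i2]) for w in W]
            g[t] = int(np.argmin(errs))
    # Final pass: rebuild each used model from all of its tasks' data, g fixed.
    for j in range(k):
        assigned = [t for t in range(len(tasks)) if g[t] == j]
        if assigned:
            W[j] = perceptron(np.vstack([tasks[t][0] for t in assigned]),
                              np.concatenate([tasks[t][1] for t in assigned]))
    return W, g

With an exact ERM base learner, each stage weakly decreases the training error of (1) on the split it sees; the perceptron above is only a heuristic stand-in for such a learner.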
Corollary 2. Fix k. For any hypothesis class H of VC-dimension d, for any hard k-shared task
classifier f = (Hk, g) we have that with probability 1 − δ,
|e(f) − ê(f)| = O( √( ((T log k + kd log(T kd)) log(m/T) + log(1/δ)) / m ) ).
The previous corollary holds for some fixed k known before observing the training data; we now
state a bound where k is chosen after seeing the data, together with g and Hk. The proof follows
from the previous corollary and performing a union bound on the different values of k.
Corollary 3. For any hypothesis class H of VC-dimension d, for any k, for any hard k-shared task
classifier f = (Hk, g) we have that with probability 1 − δ,
|e(f) − ê(f)| = O( √( ((T log k + kd log(T kd)) log(m/T) + log(k/δ)) / m ) ).
The last two bounds state that the empirical error is close to the true error under two conditions: first, that
T log k is small compared with m = Σt nt. That is, the average number of examples (per task)
should be large compared to the log-number-of-models. Thus, even with a dozen models, a few
tens of examples suffice. Second, that kd is small compared with m. The main point is that if the
VC dimension is large and the average number of examples m/T is low, it is possible to compensate
if the number of models k is small relative to the number of tasks T. Hence, we expect to improve
performance over individual models if there are many tasks, yet we predict with relatively few models.
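For intuition only, the fragment below (our own illustration) plugs rough numbers into the dominant complexity terms of Corollary 2 versus per-task learning; the VC dimension d of linear separators in R^20 is an assumed value, and constants are dropped, so only the ordering of the two terms is meaningful:

import math

# Illustration only: dominant complexity terms (constants dropped) for the
# first synthetic setup of Section 5. Assumed values: T tasks, N examples
# per task, k shared models, base-class VC dimension d (about 21 for
# linear separators with a bias term in R^20).
T, N, k, d = 200, 6, 2, 21
m = T * N
shared = math.sqrt((T * math.log(k) + k * d * math.log(T * k * d)) * math.log(m / T) / m)
individual = math.sqrt(d / N)
# At this tiny m both terms exceed 1 (the bounds are vacuous), but their
# ordering already favors the shared pool, matching the experiments.
print(f"shared-pool term ~ {shared:.2f}  vs  individual term ~ {individual:.2f}")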
We now show that our upper bound on the VC dimension is almost tight.
Theorem 4. There is a hypothesis class H of VC-dimension d, such that the class of hard k-shared
task classifiers FH,k has VC dimension at least max{kd, T min{d, log k}}.
Proof: To show the lower bound of kd, consider d points that H shatters, x1, . . . , xd. Consider the
set of examples S = {(xi, j) : 1 ≤ j ≤ k, 1 ≤ i ≤ d}. For any labeling of S, we can select for
each domain j a different hypothesis from H that agrees with the labeling. Since we have only k
different j's, we can do it with k functions. Therefore we shatter S and obtain a lower bound of kd.
Let ℓ = min{d, log k}; hence the second bound is Tℓ. Since the class H is of VC dimension d, this
implies that there are points x1, . . . , xℓ and functions h1, . . . , hk ∈ H, such that for any labeling of
the xi's there is a hypothesis hj which is consistent with it. (Since k hypotheses can shatter at most
log k points, we get the dependency on log k.) Let the sample be S = {(xi, t) : 1 ≤ i ≤ ℓ, t ∈ T}.
For any labeling of S, when we consider domain t ∈ T, there is a function hi ∈ Hk which is
consistent with the labeling. Therefore the VC dimension is at least Tℓ.
4
Learning with SHAred MOdels (SHAMO) Algorithm
The generalization bound states that we should find a pair (Hk , g) that perform well on the training
data and that k would be small a-priori. We assume that there is a learning algorithm from H called
learn(S, H) with a training set S. Formally, we assume that the hypothesis ĥ ← learn(S, H)
returned by the algorithm has the lowest training error, that is, the algorithm performs empirical
risk minimization. We propose to perform an iterative procedure alternating between two stages,
which intuitively is similar to K-means.
[Figure 2 appears here: the left panel illustrates the synthetic data; the middle and right panels plot
average error and the effective number of models against the number of models K for the Shared,
Individual, and SHAMO algorithms.]
Figure 2: Left: Illustration of data used in the first experiment. The middle (experiment I) and right
(experiment II) panels show the average error vs k for the three algorithms, and the "effective"
number of models vs k (right axis).
In the first stage, the algorithm fixes the assignment function g and finds the best k functions Hk. This
can be performed easily by calling, k times, any algorithm that learns with the hypothesis class H. On
each call, the union of the training sets that are assigned by g the same value is fed into the algorithm.
Formally, for all j = 1 . . . k, set hj ← learn(∪_{t∈Ij} St, H) where Ij = {i : g(i) = j}. In the second stage we learn the association g given Hk. Here we simply set g(t) to be the model which attains
the lowest error evaluated on the training set, that is,
g(t) = argmin_{j=1,...,k} (1/nt) Σ_{i=1}^{nt} I[hj(xt,i) ≠ yt,i].
This procedure can be repeated for a fixed number of iterations, or until a convergence criterion is
met. Specifically, in the experiments below our algorithm iterated between these steps exactly 10 times.
Clearly, each stage reduces the training error of (1), but how far the resulting hypotheses are from the
optimal ones is not clear.
In the description above the training sets St were used twice, once for finding Hk and once for
finding g. We found in practice that this leads to over-fitting; that is, in the second stage suboptimal hypotheses are assigned to g when evaluated on the test set (which clearly is not known during
training time). We thus modify the algorithm above, and use only part of the training set for each
of the tasks, with these parts not overlapping. Formally, before performing the iterations the
algorithm partitions the training set into two parts, St = St1 ∪ St2 where St1 ∩ St2 = ∅. Then the
first stage is performed by calling the learning procedure with the first set, and the second stage with the
second set. Only after the iterations are concluded do we use the entire training set to build models, without modifying the association function g. We call this algorithm SHAMO, for learning with SHAred
MOdels. The algorithm is summarized in Fig. 1.
5
Empirical Study
We evaluated our algorithm on both synthetic data and a real-world sentiment classification task. Training
was performed using the averaged Perceptron [14] executed for 10 iterations. Three methods are
evaluated: learning one model per task, called Individual below; learning one model for all tasks,
called Shared below; and learning k models using our algorithm, SHAMO. We also implemented an
online version of a batch algorithm for this setting [4]. SHAMO outperformed it in the majority
of experiments. Full details will be included in a long version of this paper.
Synthetic Data: We first report results using synthetic data. We generated 20-dimensional inputs x ∈ R^20. All features were drawn from a Gaussian with mean zero. The first two inputs of
task t were drawn with a covariance specific to that task. The remaining 18 features were drawn with
isotropic covariance. The label of input x = (x1, x2, ..., x20) was set to be sign(x2 · st) where
st ∈ {−1, +1} with probability half. We generated T = 200 such tasks, each with 6 training examples (with at least one example from each class), and ran our algorithm for various values of k.
Models were evaluated on test sets of size n = 1,000 for each task. The results below are averages
over 50 random repetitions of the data generation process. A plot of test sets (with T = 9 for ease of
presentation) appears in the left panel of Fig. 2; clearly two models are enough to classify all tasks
correctly (depending on the value of st above), and furthermore, applying the wrong model yields
test error of 100%. All 6 examples were used both to build models and to associate models to tasks.
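A sketch of this data-generation process, under our reading of the setup (the task-specific covariance on the first two coordinates is simplified to the identity, and the at-least-one-per-class constraint is omitted):

import numpy as np

def make_synthetic_tasks(T=200, n=6, dim=20, seed=0):
    # Each task t gets a hidden sign s_t in {-1, +1}; inputs are Gaussian
    # and the label is sign(s_t * x_2), so two models suffice overall.
    rng = np.random.RandomState(seed)
    tasks = []
    for _ in range(T):
        s_t = rng.choice([-1.0, 1.0])
        X = rng.randn(n, dim)
        y = np.sign(s_t * X[:, 1])  # x_2 is the second coordinate
        tasks.append((X, y))
    return tasks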
[Figure 3 appears here. Panel (a): error of Individual and Shared vs. error of SHAMO. Panel (b):
average error vs. number of models (K) for Shared, Individual, and SHAMO, with the effective
number of models on the right axis.]
Figure 3: Results for Data A (31 Tasks, 1 Thresh).
The results are summarized in the middle panel of Fig. 2, in which
we plot the mean error of the three algorithms vs the number of
models k, with error bars for 95% confidence intervals. Since
both Individual and Shared are independent of k, their lines are
flat. It is clear that Shared performs worst, with an
average error of 50% (highest line), which is explained by the
fact that the test error of half of the models over the other
half of the data-sets is about 100%. Individual performs second, with test error of about 30% obtained with only 6 training
examples. Our algorithm, SHAMO, performs the best, with
error of about 5% when allowing k = 2 models, and about
10% when allowing k = 14 models. The dotted black line
indicates the number of "effective" models per value of k,
which is the smallest number of models such that at least 90% of the tasks
are associated with (exactly) one of them. The corresponding
scale is on the right axis. Indeed, as the number of possible models k is increased to 14, the number of effective models is also
increased, but only moderately, from an average of 2 to an average of 3.5. In other words, only a small number of models are
used in practice, which avoids severe overfitting.
The next synthetic experiment was performed with 10 target
models and more noise. Here, we generated 40-dimensional
inputs x ∈ R^40. All features were drawn from a Gaussian with
mean zero. The first ten inputs of task t were drawn with a
covariance specific to that task. The remaining 30 features
were drawn with isotropic covariance. The label of input x = (x1, x2, ..., x40) was set to be sign(ut ·
(x1 . . . x10)) where ut ∈ R^10 are a set of 10 orthogonal vectors, chosen uniformly at random. As
in the first experiment, we generated T = 200 such tasks, each with 25 training examples, and ran
SHAMO with values of k ranging between 2 and 100. Models were evaluated on test sets of size
n = 1,000 for each task. The results below are averages over 50 random repetitions of the data
generation process. In these experiments ten models are enough to classify all tasks correctly, yet in
this experiment, applying the wrong model yields test error of only 50%. Out of the 25 examples
available for each task, 7 were used to build models, and the remaining 18 were used to associate
models to tasks (α = 7/25). Lower values cause overfitting, while higher values yield poor models.
The results are summarized in the right panel of Fig. 2, in which we plot the mean error of the three algorithms vs the number of models k, with error bars for 95% confidence intervals. The bottom line is
similar to the previous experiment. As before, Shared performs worst; Individual performs second,
with test error of about 15% obtained with 25 training examples. Our
algorithm, SHAMO, performs the best, with error of about 11% when allowing k = 22 models, twice
the number of real models. Additionally, it seems that the algorithm was not overfitting: even when
the number of allowed models was set to 100, the performance was the same as setting k = 25.
One possible explanation is that the algorithm is not using all of the allowed models; indeed the number
of "effective" models (which are associated with 90% of the tasks) grows moderately for numbers of
models greater than 25 (from 14 to 16). In other words, if we allow the algorithm to remove about
10% of the tasks, then only 14-16 models are enough to achieve about 11% test error on average. It
is not clear to us yet why over-fitting occurred in the first experiment but not in the second.
Sentiment Data: We followed Blitzer et al. [8] and evaluated our algorithm also on product
reviews from Amazon. We downloaded 2,000 reviews from each of 31 categories, such as books, dvd
and so on; a total of 62,000 reviews altogether. All reviews were represented using bag-of-unigrams/bigrams, using only features that appeared at least 5 times in all training sets, yielding
a dictionary of size 28,775. The reviews we used were originally labeled with 1, 2, 4, or 5 stars, as
reviews with 3 stars were very hard to predict, even with a very large amount of data.
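As an illustration of this featurization (not the authors' code), the sketch below uses scikit-learn's CountVectorizer; its min_df parameter counts documents rather than raw occurrences, so it only approximates the "appeared at least 5 times" criterion:

from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-in for the review texts; in the paper these are 62,000 reviews.
train_reviews = ["a great product , works well"] * 5 + ["not a great product"] * 5

vectorizer = CountVectorizer(ngram_range=(1, 2), min_df=5, binary=True)
X_train = vectorizer.fit_transform(train_reviews)  # sparse n_docs x n_features
print("dictionary size:", len(vectorizer.vocabulary_))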
[Figure 4 appears here. Top panels (a) Data B, 62 Tasks; (b) Data C, 124 Tasks; (c) Data D, 186
Tasks; (d) Data E, 248 Tasks: scatter plots of the error of Individual and Shared vs. the error of
SHAMO. Bottom panels (e)-(h), for the same datasets: average error vs. number of models (K) for
Shared, Individual, and SHAMO, with the effective number of models on the right axis.]
Figure 4: Top: test error of the Individual and Shared algorithms vs test error of SHAMO for k = 14, for all
datasets with 2 thresholds. Bottom: average error vs k for the three algorithms, and the "effective" number of
models vs k (right axis).
Data  Thresholds  No. Tasks  Training Size  Test Size
A     1           31         220            1,780
B     2           62         108            892
C     2           124        54             446
D     2           186        36             297
E     2           248        27             223
F     3           93         72             592
G     3           186        36             296
H     3           279        24             197
I     3           372        18             148
Table 1: Summary of sentiment datasets used.
We generated three binary prediction datasets as follows. In the first dataset, the goal was to predict
whether the number of stars associated with a review is above or below 3. Since we focus on the case
of many tasks with a small amount of data each, we used about 1/9 of the data for training and the
remaining for evaluation. Each set (training and test) contains an equal amount of reviews with 1, 2, 4,
and 5 stars. The outcome of this process is 31 tasks, each with 220 training
examples and 1,780 test examples. This dataset is in row A of Table 1.
For the second dataset we partitioned all reviews from each category into two equal sets. The
prediction problem for the first was to predict if the number of stars is 5 or not (that is, below
5). For the second set of problems the goal was to predict if the number of stars is 1 or not. The
outcome is 62 tasks with 108 training examples and 892 test examples each. We refer to this problem as
having 2 thresholds (5 and 1). This dataset is row B of Table 1. For the third dataset we partitioned
the reviews into three sets, using one of the three goals above - is the number of stars above or
below 1, is it above or below 3, and is it above or below 5 - ending up with 93 tasks with 72 training
examples and 592 test examples each. We refer to this problem as having 3 thresholds (5, 3 and
1). This dataset is row F in Table 1. Finally, we took each of the last two problems and divided each
task into 2, 3 or 4 - rows C, D, E (2 thresholds) and rows G, H, I (3 thresholds). Our setting with a few
thresholds represents different language usages, from mild to strong, for the same level of sentiment.
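One plausible way to realize this construction is sketched below (our own reading; the function, its arguments, and the round-robin split are ours):

def star_tasks(reviews_by_category, thresholds):
    # Split each category's reviews across the star thresholds, yielding one
    # binary task per (category, threshold). reviews_by_category maps a
    # category name to (text, stars) pairs, with stars in {1, 2, 4, 5}.
    tasks = {}
    for cat, reviews in reviews_by_category.items():
        for i, thr in enumerate(thresholds):
            chunk = reviews[i::len(thresholds)]  # disjoint, roughly equal parts
            tasks[(cat, thr)] = [(text, +1 if stars > thr else -1)
                                 for text, stars in chunk]
    return tasks

# thresholds=(2.5,) mimics the single-threshold "above or below 3 stars" tasks;
# (4.5, 1.5) gives the two-threshold datasets ("is it 5 stars?", "is it above
# 1 star?"), and (4.5, 2.5, 1.5) the three-threshold ones.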
Unlike in the synthetic experiments, training data was either used for building models or for associating
models to tasks. That is, we set |St1| = |St2| = 0.5|St| (i.e., α = 0.5), and used one half of the examples
to build models (set the weights of the prediction functions), and the remaining half to evaluate each of
the k models on the T tasks and to associate models to tasks. Only after this process ended did we fix
this association and learn models using all training points to build the final models.
The results for dataset A, with a single threshold, appear in Fig. 3. The top panel shows the error of Individual and Shared vs SHAMO for k = 14. Points above the line y = x indicate the superiority
of SHAMO. Although we used reviews from 31 domains, there is essentially a single task, and thus
it is best to combine the data. Indeed, all the red squares (corresponding to Individual) are above
the blue circles (corresponding to Shared), indicating that the shared model outperforms individual models. Additionally, all points corresponding to Shared lie on the diagonal, indicating that
SHAMO is performing as well as Shared, with error ≈ 16%. The bottom panel shows the performance of SHAMO vs. k. As shown, the error is fixed and is not affected by k. This is explained
by the black dashed line that, as before, shows the number of "effective" models, which is 1. Even
though the algorithm may choose up to 14 models, it is always effectively using one.
[Figure 5 appears here. Panels (a) Data F, 93 tasks; (b) Data G, 186 tasks; (c) Data H, 279 tasks;
(d) Data I, 372 tasks: average error vs. number of models (K) for Shared, Individual, and SHAMO,
with the effective number of models on the right axis.]
Figure 5: Average error vs k for the three algorithms, and the "effective" number of models vs k (right axis).
The results for datasets B-E, all with two thresholds, are summarized in Fig. 4. The top panels show
the test error of the Individual and Shared algorithms vs the test error of SHAMO for k = 14, with the number
of tasks increasing from left to right. First, as opposed to dataset A with a single threshold, in all
cases the results for Shared are worse than those of Individual. This gap gets smaller with the
number of tasks (the clouds overlap more as we go from the left panel to the right). Intuitively, Shared
introduces (label) bias as the two thresholds are being treated as one, while Individual introduces
variance as smaller and smaller training sets are used; as we go from the left panel to the right one,
the gap between bias and variance shrinks as the variance is increased. SHAMO performs the best,
as in all plots almost all the points (fewer in the right plot) are above the line y = x. Additionally, the
spread of the clouds in the top panels gets larger, indicating larger deviation in the performance
across different tasks.
The bottom panels of Fig. 4 show the average test error vs k. As Shared is not affected by k nor
T (as the total number of training examples remains the same), its test error of 36% is fixed across panels. As
we have more tasks, and fewer training examples per task, the test error of Individual increases from
25.6% to 28.9% (a gap of 3.3%). SHAMO performs the best, and is also affected by the smaller datasets,
with test error ranging between 21.8% and 24.3% (a gap of 2.5%, smaller than Individual's). In all
four datasets the optimal number of models is k = 3, and there is minor overfitting when using larger
values (at most 1%). As before, the effective number of models grows weakly with k.
The results for datasets F-I, all with three thresholds, are summarized in Fig. 5; the general trend
remains the same, and we highlight only the main differences. First, the gap between Individual
and Shared is much smaller; on some tasks one is better, and on other tasks the other is better.
Additionally, for the smallest number of tasks (left) Individual is better with a gap of ≈ 1.5%,
while for the largest number of tasks Individual is worse with a gap ranging between 1-4%. This
is exactly where the effect of the variance of small datasets became stronger than the bias emerging
from sharing. Second, in general these datasets are more heterogeneous, as indicated by the larger
standard deviation (longer error bars than in Fig. 4). As before, SHAMO performs the best, with
optimal performance when k = 3-4, and it almost does not overfit for larger values of k, as the
"effective" number of models grows slowly with k.
Summary
We described a theoretical framework for multitask learning using a small number of shared models.
Our theory suggests that many tasks can be used to compensate for a small number of training examples per task, if one can partition the tasks into a few sets with a similar labeling function per set. We
also derived a K-means-like algorithm to learn such classifiers, of both the models and the association of
tasks to models. Our experimental results on both hand-crafted problems and a real-world sentiment
classification problem strongly support the benefits of the approach, even with very few examples
per task. We plan to extend our theory to direct the optimal splitting of the training data by the
algorithm, to analyze its convergence properties, and to perform extensive experiments. We also plan to
derive theory and algorithms for soft association of tasks to classifiers.
Acknowledgements: This research is partially supported by grants from ISF and BSF, and European
Union grant IRG-256479.
References
[1] Yonatan Amit, Michael Fink, Nathan Srebro, and Shimon Ullman. Uncovering shared structures in multiclass classification. In ICML, pages 17-24, 2007.
[2] Rie Kubota Ando and Tong Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817-1853, 2005.
[3] Martin Anthony and Peter L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[4] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008.
[5] Bart Bakker and Tom Heskes. Task clustering and gating for bayesian multitask learning. Journal of Machine Learning Research, 4:83-99, 2003.
[6] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151-175, 2010.
[7] Gilles Blanchard, Gyemin Lee, and Clay Scott. Generalizing from several related classification tasks to a new unlabeled sample. In NIPS, 2011.
[8] John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Association for Computational Linguistics (ACL), 2007.
[9] Edwin V. Bonilla, Felix V. Agakov, and Christopher K. I. Williams. Kernel multi-task learning using task-specific features. Journal of Machine Learning Research - Proceedings Track, 2:43-50, 2007.
[10] Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from multiple sources. Journal of Machine Learning Research, 9:1757-1774, 2008.
[11] Koby Crammer, Michael J. Kearns, and Jennifer Wortman. Learning from data of variable quality. In NIPS, 2005.
[12] Theodoros Evgeniou, Charles A. Micchelli, and Massimiliano Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615-637, 2005.
[13] Theodoros Evgeniou and Massimiliano Pontil. Regularized multi-task learning. In KDD, pages 109-117, 2004.
[14] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, 1998. To appear, Machine Learning.
[15] Hal Daumé III. Frustratingly easy domain adaptation. In ACL, 2007.
[16] Hal Daumé III. Bayesian multitask learning with latent hierarchies. In UAI, pages 135-142, 2009.
[17] Neil D. Lawrence and John C. Platt. Learning to learn with the informative vector machine. In ICML, 2004.
[18] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation with multiple sources. In NIPS, pages 1041-1048, 2008.
[19] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In COLT, 2009.
[20] Guillaume Obozinski, Ben Taskar, and Michael I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20(2):231-252, 2010.
[21] Kai Yu, Volker Tresp, and Anton Schwaighofer. Learning gaussian processes from multiple tasks. In ICML, pages 1012-1019, 2005.
[22] Shipeng Yu, Volker Tresp, and Kai Yu. Robust multi-task learning with t-processes. In ICML, pages 1103-1110, 2007.
3,862 | 4,497 | Emergence of Object-Selective Features in
Unsupervised Feature Learning
Adam Coates, Andrej Karpathy, Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
{acoates,karpathy,ang}@cs.stanford.edu
Abstract
Recent work in unsupervised feature learning has focused on the goal of discovering high-level features from unlabeled images. Much progress has been made in
this direction, but in most cases it is still standard to use a large amount of labeled
data in order to construct detectors sensitive to object classes or other complex
patterns in the data. In this paper, we aim to test the hypothesis that unsupervised
feature learning methods, provided with only unlabeled data, can learn high-level,
invariant features that are sensitive to commonly-occurring objects. Though a
handful of prior results suggest that this is possible when each object class accounts for a large fraction of the data (as in many labeled datasets), it is unclear
whether something similar can be accomplished when dealing with completely
unlabeled data. A major obstacle to this test, however, is scale: we cannot expect
to succeed with small datasets or with small numbers of learned features. Here,
we propose a large-scale feature learning system that enables us to carry out this
experiment, learning 150,000 features from tens of millions of unlabeled images.
Based on two scalable clustering algorithms (K-means and agglomerative clustering), we find that our simple system can discover features sensitive to a commonly
occurring object class (human faces) and can also combine these into detectors invariant to significant global distortions like large translations and scale.
1
Introduction
Many algorithms are now available to learn hierarchical features from unlabeled image data. There
is some evidence that these algorithms are able to learn useful high-level features without labels, yet
in practice it is still common to train such features from labeled datasets (but ignoring the labels), and
to ultimately use a supervised learning algorithm to learn to detect more complex patterns that the
unsupervised learning algorithm is unable to find on its own. Thus, an interesting open question is
whether unsupervised feature learning algorithms are able to construct features, without the benefit
of supervision, that can identify high-level concepts like frequently-occurring object classes. It is
already known that this can be achieved when the dataset is sufficiently restricted that object classes
are clearly defined (typically closely cropped images) and occur very frequently [13, 21, 22]. In this
work we aim to test whether unsupervised learning algorithms can achieve a similar result without
any supervision at all.
The setting we consider is a challenging one. We have harvested a dataset of 1.4 million image
thumbnails from YouTube and extracted roughly 57 million 32-by-32 pixel patches at random locations and scales. These patches are very different from those found in labeled datasets like CIFAR10 [9]. The overwhelming majority of patches in our dataset appear to be random clutter. In the
cases where such a patch contains an identifiable object, it may well be scaled, arbitrarily cropped,
or uncentered. As a result, it is very unclear where an "object class" begins or ends in this type of
patch dataset, and less clear that a completely unsupervised learning algorithm could manage to create
"object-selective" features able to distinguish an object from the wide variety of clutter without
some other type of supervision.
In order to have some hope of success, we can identify several key properties that our learning
algorithm should likely have. First, since identifiable objects show up very rarely, it is clear that
we are obliged to train from extremely large datasets. We have no way of controlling how often
a particular object shows up and thus enough data must be used to ensure that an object class is
seen many times, often enough that it cannot be disregarded as random clutter. Second, we are
also likely to need a very large number of features. Training too few features will cause us to
"under-fit" the distribution, forcing the learning algorithm to ignore rare events like objects. Finally,
as is already common in feature learning work, we should aim to build features that incorporate
invariance so that features respond not just to a specific pattern (e.g., an object at a single location
and scale), but to a range of patterns that collectively belong to the same object class (e.g., the same
object seen at many locations and scales). Unfortunately, these desiderata are difficult to achieve at
once: current methods for building invariant hierarchies of features are difficult to scale up to train
many thousands of features from our 57 million patch dataset on our cluster of 30 machines.
In this paper, we will propose a highly scalable combination of clustering algorithms for learning
selective and invariant features that are capable of tackling this size of problem. Surprisingly, we
find that despite the simplicity of these algorithms we are nevertheless able to discover high-level
features sensitive to the most commonly occurring object class present in our dataset: human faces.
In fact, we find that these features are better face detectors than a linear filter trained from labeled
data, achieving up to 86% AUC compared to 77% on labeled validation data. Thus, our results emphasize that not only can unsupervised learning algorithms discover object-selective features with
no labeled data, but that such features can potentially perform better than basic supervised detectors
due to their deep architecture. Though our approach is based on fast clustering algorithms (K-means
and agglomerative clustering), its basic behavior is essentially similar to existing methods for building invariant feature hierarchies, suggesting that other popular feature learning methods currently
available may also be able to achieve such results if run at large enough scale. Indeed, recent work
with a more sophisticated (but vastly more expensive) feature-learning algorithm appears to achieve
similar results [11] when presented with full-frame images.
We will begin with a description of our algorithms for learning selective and invariant features, and
explain their relationship to existing systems. We will then move on to presenting our experimental
results. Related results and methods to our own will be reviewed briefly before concluding.
2
Algorithm
Our system is built on two separate learning modules: (i) an algorithm to learn selective features
(linear filters that respond to a specific input pattern), and (ii) an algorithm to combine the selective
features into invariant features (that respond to a spectrum of gradually changing patterns). We
will refer to these features as "simple cells" and "complex cells" respectively, in analogy to previous
work and to biological cells with (very loosely) related response properties. Following other popular
systems [14, 12, 6, 5] we will then use these two algorithms to build alternating layers of simple cell
and complex cell features.
2.1
Learning Selective Features (Simple Cells)
The first module in our learning system trains a bank of linear filters to represent our selective
"simple cell" features. For this purpose we use the K-means-like method used by [2], which has
previously been used for large-scale feature learning.
The algorithm is given a set of input vectors x(i) ∈ R^n, i = 1, . . . , m. These vectors are preprocessed by removing the mean and normalizing each example, then performing PCA whitening.
We then learn a dictionary D ∈ R^{n×d} of linear filters as in [2] by alternating optimization over
filters D and "cluster assignments" C:
minimize over D, C:   Σ_i ||D C(i) − x(i)||_2^2
subject to ||D(j)||_2 = 1, ∀j, and ||C(i)||_0 ≤ 1, ∀i.
Here the constraint ||C(i)||_0 ≤ 1 means that the vectors C(i), i = 1, . . . , m are allowed to contain
only a single non-zero, but the non-zero value is otherwise unconstrained. Given the linear filters
D, we then define the responses of the learned simple cell features as s(i) = g(a(i)) where a(i) =
Dᵀx(i) and g(·) is a nonlinear activation function. In our experiments we will typically use g(a) =
|a| for the first layer of simple cells, and g(a) = a for the second.¹
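A compact numpy sketch of this alternating optimization (our own implementation of the update described above; the initialization and damping details used in [2] are omitted):

import numpy as np

def learn_simple_cells(X, d, iters=10, seed=0):
    # X: (m, n) patches, assumed mean-removed, normalized, PCA-whitened.
    # Returns unit-norm filters D of shape (n, d).
    rng = np.random.RandomState(seed)
    D = rng.randn(X.shape[1], d)
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(iters):
        A = X @ D                             # all projections, shape (m, d)
        j = np.argmax(np.abs(A), axis=1)      # location of the single nonzero code
        c = A[np.arange(len(X)), j]           # its unconstrained value
        Dnew = np.zeros_like(D)
        np.add.at(Dnew.T, j, c[:, None] * X)  # accumulate sum_i c_i * x_i per filter
        norms = np.linalg.norm(Dnew, axis=0)
        live = norms > 0
        D[:, live] = Dnew[:, live] / norms[live]  # project back to unit norm
    return D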
2.2
Learning Invariant Features (Complex Cells)
To construct invariant complex cell features a common approach is to create "pooling units" that
combine the responses of lower-level simple cells. In this work, we use max-pooling units [14, 13].
Specifically, given a vector of simple cell responses s(i), we will train complex cell features whose
responses are given by:
c_j(i) = max_{k∈Gj} s_k(i)
where Gj is a set that specifies which simple cells the j-th complex cell should pool over. Thus, the
complex cell cj is an invariant feature that responds significantly to any of the patterns represented
by simple cells in its group.
Each group Gj should specify a set of simple cells that are, in some sense, similar to one another.
In convolutional neural networks [12], for instance, each group is hard-coded to include translated
copies of the same filter, resulting in complex cell responses cj that are invariant to small translations.
Some algorithms [6, 3] fix the groups Gj ahead of time and then optimize the simple cell filters D
so that the simple cells in each group share a particular form of statistical dependence. In our
system, we will use the linear correlation of simple cell responses as our similarity metric, E[ak al], and
construct groups Gj that combine similar features according to this metric. Computing the similarity
directly would normally require us to estimate the correlations from data, but since the inputs x(i)
are whitened we can instead compute the similarity directly from the filter weights:
E[ak al] = E[D(k)ᵀ x(i) x(i)ᵀ D(l)] = D(k)ᵀ D(l).
For convenience in the following, we will actually use the dissimilarity between features, defined as
d(k, l) = ||D(k) − D(l)||_2 = √(2 − 2E[ak al]).
To construct the groups G, we will use a version of single-link agglomerative clustering to combine
sets of features that have low dissimilarity according to d(k, l).² To construct a single group G0 we
begin by choosing a random simple cell filter, say D(k), as the first member. We then search for
candidate cells to be added to the group by computing d(k, l) for each simple cell filter D(l) and add
D(l) to the group if d(k, l) is less than some limit τ. The algorithm then continues to expand G0 by
adding any additional simple cells that are closer than τ to any one of the simple cells already in the
group. This procedure continues until there are no more cells to be added, or until the diameter of
the group (the dissimilarity between the two furthest cells in the group) reaches a limit Δ.³
This procedure can be executed, quite rapidly, in parallel for a large number of randomly chosen
simple cells acting as the "seed" cell, thus allowing us to train many complex cells at once. Compared to the simple cell learning procedure, the computational cost is extremely small even for our
rudimentary implementation. In practice, we often generate many groups (e.g., several thousand)
and then keep only a random subset of the largest groups. This ensures that we do not end up with
many groups that pool over very few simple cells (and hence yield complex cells cj that are not
especially invariant).
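A sketch of growing one such group (our own illustration; it uses the plain distance d(k, l) rather than the sign-invariant variant of footnote 2, and checks the diameter limit per candidate rather than stopping the whole procedure):

import numpy as np

def grow_group(D, seed_idx, tau=0.3, delta=1.5):
    # D: (n, d) matrix of unit-norm filters. Dissimilarity between filters
    # k and l is d(k, l) = ||D_k - D_l|| = sqrt(2 - 2 D_k . D_l).
    dist = np.sqrt(np.maximum(0.0, 2.0 - 2.0 * (D.T @ D)))  # all pairwise d(k, l)
    group, frontier = {seed_idx}, {seed_idx}
    while frontier:
        # Single link: anything within tau of a current member is a candidate.
        near = set(np.where(dist[list(frontier)].min(axis=0) < tau)[0]) - group
        # Per-candidate diameter check (a variant of the stopping rule above).
        near = {l for l in near if dist[l, list(group)].max() <= delta}
        group |= near
        frontier = near
    return sorted(group)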
2.3
Algorithm Behavior
Though it seems plausible that pooling simple cells with similar-looking filters according to d(k, l)
as above should give us some form of invariant feature, it may not yet be clear why this form of
¹ This allows us to train roughly half as many simple cell features for the first layer.
² Since the first layer uses g(a) = |a|, we actually use d(k, l) = min{||D(k) − D(l)||_2, ||D(k) + D(l)||_2}
to account for −D(l) and +D(l) being essentially the same feature.
³ We use τ = 0.3 for the first layer of complex cells and τ = 1.0 for the second layer. These were chosen
by examining the typical distance between a filter D(k) and its nearest neighbor. We use Δ = 1.5 > √2 so
that a complex cell group may include orthogonal filters but cannot grow without limit.
invariance is desirable. To explain, we will consider a simple "toy" data distribution where the
behavior of these algorithms is more clear. Specifically, we will generate three heavy-tailed random
variables X, Y, Z according to:
σ1, σ2 ∼ L(0, λ)
e1, e2, e3 ∼ N(0, 1)
X = e1 σ1, Y = e2 σ1, Z = e3 σ2
Here, σ1, σ2 are scale parameters sampled independently from a Laplace distribution, and e1, e2, e3
are sampled independently from a unit Gaussian. The result is that Z is independent of both X and
Y, but X and Y are not independent due to their shared scale parameter σ1 [6]. An isocontour of
the density of this distribution is shown in Figure 1a.
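The snippet below (our own illustration) samples this distribution and checks the key property numerically: X and Y are linearly uncorrelated yet have strongly correlated energies, while Z is independent of both:

import numpy as np

rng = np.random.RandomState(0)
m = 100000
s1, s2 = rng.laplace(0.0, 1.0, size=(2, m))   # shared / independent scales
e1, e2, e3 = rng.randn(3, m)
X, Y, Z = e1 * s1, e2 * s1, e3 * s2
print(np.corrcoef(X, Y)[0, 1])        # ~0: X, Y are linearly uncorrelated
print(np.corrcoef(X**2, Y**2)[0, 1])  # clearly positive: shared scale s1
print(np.corrcoef(X**2, Z**2)[0, 1])  # ~0: Z's scale is independent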
Other popular algorithms [6, 5, 3] for learning complex-cell features are designed to identify X and
Y as features to be pooled together due to the correlation in their energies (scales). One empirical
motivation for this kind of invariance comes from natural images: if we have three simple-cell filter
responses a1 = D(1)ᵀx, a2 = D(2)ᵀx, a3 = D(3)ᵀx where D(1) and D(2) are Gabor filters in
quadrature phase, but D(3) is a Gabor filter at a different orientation, then the responses a1, a2, a3
will tend to have a distribution very similar to the model of X, Y, Z above [7]. By pooling together
the responses of a1 and a2, a complex cell is able to detect an edge of fixed orientation invariant
to small translations. This model also makes sense for higher-level invariances where X and Y do
not merely represent responses of linear filters on image patches but feature responses in a deep
network. Indeed, the X-Y plane in Figure 1a is referred to as an "invariant subspace" [8].
Our combination of simple cell and complex cell learning algorithms above tends to learn this same
type of invariance. After whitening and normalization, the data points X, Y, Z drawn from the
distribution above will lie (roughly) on a sphere. The density of these data points is pictured in
Figure 1b, where it can be seen that the highest density areas are in a "belt" in the X-Y plane and
at the poles along the Z axis, with a low-density region in between. Application of our K-means
clustering method to this data results in centroids shown as * marks in Figure 1b. From this picture
it is clear what a subsequent application of our single-link clustering algorithm will do: it will try to
string together the centroids around the "belt" that forms the invariant subspace and avoid connecting
them to the (distant) centroids at the poles. Max-pooling over the responses of these filters will result
in a complex cell that responds consistently to points in the X-Y plane, but not in the Z direction;
that is, we end up with an invariant feature detector very similar to those constructed by existing
methods. Figure 1c depicts this result, along with visualizations of the hypothetical Gabor filters
D(1), D(2), D(3) described above that might correspond to the learned centroids.
Figure 1: (a) An isocontour of a sparse probability distribution over variables X, Y, and Z. (See text
for details.) (b) A visualization of the spherical density obtained from the distribution in (a) after
normalization. Red areas are high density and dark blue areas are low density. Centroids learned
by K-means from this data are shown on the surface of the sphere as * marks. (c) A pooling unit
identified by applying single-link clustering to the centroids (black links join pooled filters). (See
text.)
2.4
Feature Hierarchy
Now that we have defined our simple and complex cell learning algorithms, we can use them to train
alternating layers of selective and invariant features. We will train 4 layers total, 2 of each type. The
architecture we use is pictured in Figure 2a.
Figure 2: (a) Cross-section of network architecture used for experiments. Full layer sizes are shown
at right. (b) Randomly selected 128-by-96 images from our dataset.
Our first layer of simple cell features are locally connected to 16 non-overlapping 8-by-8 pixel patches within the 32-by-32 pixel image. These features are trained by building a dataset of 8-by-8 patches and passing them to our simple cell learning procedure to train 6400 first-layer filters D ∈ R^{64×6400}. We apply our complex cell learning procedure to this bank of filters to find 128 pooling groups G1, G2, ..., G128. Using these results, we can extract our simple cell and complex cell features from each 8-by-8 pixel subpatch of the 32-by-32 image. Specifically, the linear filters D are used to extract the first-layer simple cell responses s_i^(p) = g(D^(i)ᵀ x^(p)), where x^(p), p = 1, ..., 16, are the 16 subpatches of the 32-by-32 image. We then compute the complex cell feature responses c_j^(p) = max_{k ∈ G_j} s_k^(p) for each patch.

Once complete, we have an array of 128-by-4-by-4 = 2048 complex cell responses c representing each 32-by-32 image. These responses are then used to form a new dataset from which to learn a second layer of simple cells with K-means. In our experiments we train 150,000 second-layer simple cells. We denote the second layer of learned filters as D̂, and the second-layer simple cell responses as ŝ = D̂ᵀc. Applying our complex cell learning procedure again to D̂, we obtain pooling groups Ĝ and complex cells ĉ defined analogously.
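The sketch below spells out this two-layer extraction for one image, assuming first-layer filters D and pooling groups have already been learned; taking g(·) to be absolute-value rectification matches footnote 6, but the helper name and defaults are ours.

    import numpy as np

    def extract_complex_features(img32, D, groups, g=np.abs):
        """img32: flattened 32x32 image; D: 64 x n_filters; groups: index sets.

        Returns the concatenated complex-cell responses over the 16 subpatches,
        i.e., 16 * len(groups) values (128-by-4-by-4 = 2048 in the setup above).
        """
        img = img32.reshape(32, 32)
        feats = []
        for i in range(4):
            for j in range(4):
                x = img[8*i:8*i+8, 8*j:8*j+8].ravel()  # one 8x8 subpatch x^(p)
                s = g(D.T @ x)                         # simple cells s^(p)
                feats.append([s[grp].max() for grp in groups])  # complex cells
        return np.concatenate(feats)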
3 Experiments
As described above, we ran our algorithm on patches harvested from YouTube thumbnails downloaded from the web. Specifically, we downloaded the thumbnails for over 1.4 million YouTube videos,⁴ some of which are shown in Figure 2b. These images were downsampled to 128-by-96 pixels and converted to grayscale. We cropped 57 million randomly selected 32-by-32 pixel patches from these images to form our unlabeled training set. No supervision was used; thus most patches contain partial views of objects or clutter at differing scales. We ran our algorithm on these images using a cluster of 30 machines over 3 days, with virtually all of the time spent training the 150,000 second-layer features.⁵ We will now visualize these features and check whether any of them have learned to identify an object class.
3.1 Low-Level Simple and Complex Cell Visualizations
We visualize the learned low-level filters D and pooling groups G to verify that they are, in fact, similar to those learned by other well-known algorithms. It is already known that our K-means-based algorithm learns simple-cell-like filters (e.g., edge-like features, as well as spots and curves), as shown in Figure 3a.

To visualize the learned complex cells we inspect the simple cell filters that belong to each of the pooling groups. The filters for several pooling groups are shown in Figure 3b. As expected, the filters cover a spectrum of similar image structures. Though many pairs of filters are extremely similar,⁶ there are also other pairs that differ significantly yet are included in the group due to the single-link clustering method. Note that some of our groups are composed of similar edges at differing locations, and thus appear to have learned translation invariance as expected.

⁴ We cannot select videos at random, so we query videos under each YouTube category ("Pets & Animals", "Science & Technology", etc.) along with a date (e.g., "January 2001").
⁵ Though this is a fairly long run, we note that 1 iteration of K-means is cheaper than a single batch gradient step for most other methods able to learn high-level invariant features. We expect that these experiments would be impossible to perform in a reasonable amount of time on our cluster with another algorithm.
⁶ Some filters have reversed polarity due to our use of absolute-value rectification during training of the first layer.
3.2 Higher-Level Simple and Complex Cells
Finally, we inspect the learned higher-layer simple cell and complex cell features, ŝ and ĉ, particularly to see whether any of them are selective for an object class. The most commonly occurring object in these video thumbnails is human faces (even though we estimate that much less than 0.1% of patches contain a well-framed face). Thus we search through our learned features for cells that are selective for human faces at varying locations and scales. To locate such features we use a dataset of labeled images: several hundred thousand non-face images as well as tens of thousands of known face images from the "Labeled Faces in the Wild" (LFW) dataset [4].⁷

To test whether any of the ŝ simple cell features are selective for faces, we use each feature by itself as a "detector" on the labeled dataset: we compute the area under the precision-recall curve (AUC) obtained when each feature's response ŝᵢ is used as a simple classifier. Indeed, it turns out that there are a handful of high-level features that tend to be good detectors for faces. The precision-recall curves for the best 5 detectors are shown in Figure 3c (top curves); the best of these achieves 86% AUC. We visualize 16 of the simple cell features identified by this procedure⁸ in Figure 4a, along with a sampling of the image patches that activate the first of these cells strongly. There it can be seen that these simple cells are selective for faces located at particular locations and scales. Within each group the faces differ slightly due to the learned invariance provided by the complex cells in the lower layer (and thus the mean of each group of images is blurry).
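This probe is easy to sketch; here `responses` and `labels` are random placeholders for the real ŝ responses and LFW-based labels, and scikit-learn's average precision serves as the PR-AUC.

    import numpy as np
    from sklearn.metrics import average_precision_score

    rng = np.random.default_rng(0)
    responses = rng.random((5000, 150))   # rows: images, cols: candidate cells
    labels = rng.random(5000) < 0.1       # True = face (placeholder labels)

    auc = np.array([average_precision_score(labels, responses[:, k])
                    for k in range(responses.shape[1])])
    best = np.argsort(auc)[::-1][:5]      # five most face-selective cells
    print(best, auc[best])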
[Figure 3: three panels (a), (b), (c); panel (c) plots Precision (0.4-1.0) against Recall (0-1.0). See caption below.]
Figure 3: (a) First layer simple cell filters learned by K-means. (b) Sets of simple cell filters belonging to three pooling groups learned by our complex cell training algorithm. (c) Precision-Recall
curves showing selectivity for human faces of 5 low-level simple cells trained from a full 32-by-32
patch (red curves, bottom) versus 5 higher-level simple cells (green curves, top). Performance of the
best linear filter found by SVM from labeled data is also shown (black dotted curve, middle).
It may appear that this result could be obtained by applying our simple cell learning procedure directly to full 32-by-32 images without any attempts at incorporating local invariance. That is, rather than training D (the first-layer filters) from 8-by-8 patches, we could try to train D directly from the 32-by-32 images. This turns out not to be successful. The lower curves in Figure 3c are the precision-recall curves for the best 5 simple cells found in this way. Clearly the higher-level features are dramatically better detectors than simple cells built directly from pixels⁹ (only 64% AUC).
⁷ Our positive face samples include the entire set of labeled faces, plus randomly scaled and translated copies.
⁸ We visualize the higher-level features by averaging together the 100 unlabeled images from our YouTube dataset that elicit the strongest activation.
⁹ These simple cells were trained by applying K-means to normalized, whitened 32-by-32 pixel patches from a smaller unlabeled set known to have a higher concentration of faces. Due to this, a handful of centroids look roughly like face exemplars and act as simple "template matchers". When trained on the full dataset (which contains far fewer faces), K-means learns only edge and arc features which perform much worse (about 45% AUC).
                                 AUC
Best 32-by-32 simple cell        64%
Best in ŝ                        86%
Best in ĉ                        80%
Supervised Linear SVM            77%

Table 1: Area under PR curve for different cells on our face detection validation set. Only the SVM uses labeled data.
Figure 4: Visualizations. (a) A collection of patches from our unlabeled dataset that maximally activate one of the high-level simple cells from ŝ. (b) The mean of the top stimuli for a handful of face-selective cells in ŝ. (c) Visualization of the face-selective cells that belong to one of the complex cells in ĉ discovered by the single-link clustering algorithm applied to D̂. (d) A collection of unlabeled patches that elicit a strong response from the complex cell visualized in (c); virtually all are faces, at a variety of scales and positions. Compare to (a).
As a second control experiment we train a linear SVM from half of the labeled data using only
pixels as input (contrast-normalized and whitened). The PR curve for this linear classifier is shown
in Figure 3c as a black dotted line. There we see that the supervised linear classifier is significantly
better (77% AUC) than the 32-by-32 linear simple cells. On the other hand, it does not perform as
well as the higher level simple cells learned by our system even though it is likely the best possible
linear detector.
Finally, we inspect the higher-level complex cells learned by applying the same agglomerative clustering procedure to the higher-level simple cell filters. Due to the invariance introduced at the lower layers, two simple cells that detect faces at slightly different locations or scales will often have very similar filter weights, and thus we expect our algorithm to find and combine these simple cells into higher-level invariant feature cells.
To visualize our higher-level complex cell features ĉ, we can simply look at visualizations for all of the simple cells in each of the groups Ĝ. These visualizations show us the set of patches that strongly activate each simple cell, and hence also activate the complex cell. The results of such a visualization for one group that was found to contain only face-selective cells is shown in Figure 4c. There it can be seen that this single "complex cell" selects for faces at multiple positions and scales. A sampling of image patches collected from the unlabeled data that strongly activate the corresponding complex cell are shown in Figure 4d. We see that the complex cell detects many faces but at a much wider variety of positions and scales compared to the simple cells, demonstrating that even "higher level" invariances are being captured, including scale invariance. Benchmarked on our labeled set, this complex cell achieves 80.0% AUC, somewhat worse than the very best simple cells, but still in the top 10 performing cells in the entire network. Interestingly, the qualitative results in Figure 4d are excellent, and we believe these images represent an even greater range of variations than those in the labeled set. Thus the 80% AUC number may somewhat under-rate the quality of these features.
These results suggest that the basic notions of invariance and selectivity that underpin popular feature learning algorithms may be sufficient to discover the kinds of high-level features that we desire, possibly including whole object classes robust to local and global variations. Indeed, using simple implementations of selective and invariant features closely related to existing algorithms, we have found that it is possible to build features with high selectivity for a coherent, commonly occurring object class. Though human faces occur only very rarely in our very large dataset, it is clear that the complex cell visualized in Figure 4d is adept at spotting them amongst tens of millions of images. The enabler for these results is the scalability of the algorithms we have employed, suggesting that other systems can likely achieve similar results to the ones shown here if their computational limitations are overcome.
4 Related Work
The method that we have proposed has close connections to a wide array of prior work. For instance,
the basic notions of selectivity and invariance that drive our system can be identified in many other
algorithms: Group sparse coding methods [3] and Topographic ICA [6, 7] build invariances by
pooling simple cells that lie in an invariant subspace, identified by strong scale correlations between
cell responses. The advantage of this criterion is that it can determine which features to pool together
even when the simple cell filters are orthogonal (where they would be too far apart for our algorithm
to recognize their relationship). Our results suggest that while this type of invariance is very useful,
there exist simple ways of achieving a similar effect.
Our approach is also connected with methods that attempt to model the geometric (e.g., manifold)
structure of the input space. For instance, Contractive Auto-Encoders [16, 15], Local Coordinate
Coding [20], and Locality-constrained Linear Coding [19] learn sparse linear filters while attempting
to model the manifold structure staked out by these filters (sometimes termed "anchor points").
One interpretation of our method, suggested by Figure 1b, is that with extremely overcomplete
dictionaries it is possible to use trivial distance calculations to identify neighboring points on the
manifold. This in turn allows us to construct features invariant to shifts along the manifold with
little effort. [1] use similar intuitions to propose a clustering method similar to our approach.
One of our key results, the unsupervised discovery of features selective for human faces, is fairly unique (though seen recently in the extremely large system of [11]). Results of this kind have appeared previously in restricted settings. For instance, [13] trained Deep Belief Network models that decomposed object classes like faces and cars into parts, using a probabilistic max-pooling to gain translation invariance. Similarly, [21] has shown results of a similar flavor on the Caltech recognition datasets. [22] showed that a probabilistic model (with some hand-coded geometric knowledge) can recover clusters containing 20 known object class silhouettes from outlines in the LabelMe dataset. Other authors have shown the ability to discover detailed manifold structure (e.g., as seen in the results of embedding algorithms [18, 17]) when trained in similarly restricted settings. The structure that these methods discover, however, is far more apparent when we are using labeled, tightly cropped images. Even if we do not use the labels themselves, the labeled examples are, by construction, highly clustered: faces will be separated from other objects because there are no partial faces or random clutter. In our dataset, no supervision is used except to probe the representation post hoc.
Finally, we note the recent, extensive findings of Le et al. [11]. In that work an extremely large 9-layer neural network based on a TICA-like learning algorithm [10, 6] is also capable of identifying a wide variety of object classes (including cats and upper-bodies of people) seen in YouTube videos. Our results complement this work in several key ways. First, by training on smaller randomly cropped patches, we show that object-selectivity may still be obtained even when objects are almost never framed properly within the image, ruling out this bias as the source of object-selectivity. Second, we have shown that the key concepts (sparse selective filters and invariant-subspace pooling) used in their system can also be implemented in a different way using scalable clustering algorithms, allowing us to achieve results reminiscent of theirs using a vastly smaller amount of computing power. (We used 240 cores, while their large-scale system is composed of 16,000 cores.) In combination, these results point strongly to the conclusion that almost any highly scalable implementation of existing feature-learning concepts is enough to discover these sophisticated high-level representations.
5 Conclusions
In this paper we have presented a feature learning system composed of two highly scalable but otherwise very simple learning algorithms: K-means clustering to find sparse linear filters ("simple cells") and agglomerative clustering to stitch simple cells together into invariant features ("complex cells"). We showed that these two components are, in fact, capable of learning complicated high-level representations in large-scale experiments on unlabeled images pulled from YouTube. Specifically, we found that higher-level simple cells could learn to detect human faces without any supervision at all, and that our complex-cell learning procedure combined these into even higher-level invariances. These results indicate that we are apparently equipped with many of the key principles needed to achieve such results, and that a critical remaining puzzle is how to scale up our algorithms to the sizes needed to capture more object classes and even more sophisticated invariances.
References
[1] Y. Boureau, N. L. Roux, F. Bach, J. Ponce, and Y. LeCun. Ask the locals: multi-way local pooling for image recognition. In 13th International Conference on Computer Vision, pages 2651-2658, 2011.
[2] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning, pages 921-928, 2011.
[3] P. Garrigues and B. Olshausen. Group sparse coding with a Laplacian scale mixture prior. In Advances in Neural Information Processing Systems 23, pages 676-684, 2010.
[4] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
[5] A. Hyvärinen and P. Hoyer. Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705-1720, 2000.
[6] A. Hyvärinen, P. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527-1558, 2001.
[7] A. Hyvärinen, J. Hurri, and P. Hoyer. Natural Image Statistics. Springer-Verlag, 2009.
[8] T. Kohonen. Emergence of invariant-feature detectors in self-organization. In M. Palaniswami et al., editor, Computational Intelligence, A Dynamic System Perspective, pages 17-31. IEEE Press, New York, 1995.
[9] A. Krizhevsky. Learning multiple layers of features from Tiny Images. Master's thesis, Dept. of Comp. Sci., University of Toronto, 2009.
[10] Q. Le, A. Karpenko, J. Ngiam, and A. Ng. ICA with reconstruction cost for efficient overcomplete feature learning. In Advances in Neural Information Processing Systems, 2011.
[11] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012.
[12] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:541-551, 1989.
[13] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In International Conference on Machine Learning, pages 609-616, 2009.
[14] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2, 1999.
[15] S. Rifai, Y. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In Advances in Neural Information Processing, 2011.
[16] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In International Conference on Machine Learning, 2011.
[17] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, December 2000.
[18] L. van der Maaten and G. Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, November 2008.
[19] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In Computer Vision and Pattern Recognition, pages 3360-3367, 2010.
[20] K. Yu, T. Zhang, and Y. Gong. Nonlinear learning using local coordinate coding. In Advances in Neural Information Processing Systems 22, pages 2223-2231, 2009.
[21] M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In International Conference on Computer Vision, 2011.
[22] L. Zhu, Y. Chen, A. Torralba, W. Freeman, and A. Yuille. Part and Appearance Sharing: Recursive Compositional Models for Multi-View Multi-Object Detection. In Computer Vision and Pattern Recognition, 2010.
Approximate Message Passing with Consistent Parameter Estimation and Applications to Sparse Learning
Ulugbek S. Kamilov
EPFL
[email protected]
Sundeep Rangan
Polytechnic Institute of New York University
[email protected]
Alyson K. Fletcher
University of California, Santa Cruz
[email protected]
Michael Unser
EPFL
[email protected]
Abstract
We consider the estimation of an i.i.d. vector x ∈ R^n from measurements y ∈ R^m obtained by a general cascade model consisting of a known linear transform followed by a probabilistic componentwise (possibly nonlinear) measurement channel. We present a method, called adaptive generalized approximate message passing (Adaptive GAMP), that enables joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector x. Our method can be applied to a large class of learning problems including the learning of sparse priors in compressed sensing or identification of linear-nonlinear cascade models in dynamical systems and neural spiking processes. We prove that for large i.i.d. Gaussian transform matrices the asymptotic componentwise behavior of the adaptive GAMP algorithm is predicted by a simple set of scalar state evolution equations. This analysis shows that the adaptive GAMP method can yield asymptotically consistent parameter estimates, which implies that the algorithm achieves a reconstruction quality equivalent to the oracle algorithm that knows the correct parameter values. The adaptive GAMP methodology thus provides a systematic, general and computationally efficient method applicable to a large range of complex linear-nonlinear models with provable guarantees.
1 Introduction
Consider the estimation of a random vector x ∈ R^n from a measurement vector y ∈ R^m. As illustrated in Figure 1, the vector x, which is assumed to have i.i.d. components x_j ~ P_X, is passed through a known linear transform that outputs z = Ax ∈ R^m. The components of y ∈ R^m are generated by a componentwise transfer function P_{Y|Z}. This paper addresses the cases where the distributions P_X and P_{Y|Z} have some parametric uncertainty that must be learned so as to properly estimate x.

This joint estimation and learning problem with linear transforms and componentwise nonlinearities arises in a range of applications, including empirical Bayesian approaches to inverse problems in signal processing, linear regression and classification [1, 2], and, more recently, Bayesian compressed sensing for estimation of sparse vectors x from underdetermined measurements [3-5].
[Figure 1 diagram: unknown i.i.d. signal with a signal prior -> mixing matrix producing linear measurements -> componentwise output channel producing the available measurements.]

Figure 1: Measurement model considered in this work. The vector x ∈ R^n with an i.i.d. prior P_X(x|λ_x) passes through the linear transform A ∈ R^{m×n} followed by a componentwise nonlinear channel P_{Y|Z}(y|z, λ_z) to result in y ∈ R^m. The prior P_X and the nonlinear channel P_{Y|Z} depend on the unknown parameters λ_x and λ_z, respectively. We propose adaptive GAMP to jointly estimate x and (λ_x, λ_z) given the measurements y.
Also, since the parameters in the output transfer function P_{Y|Z} can model unknown nonlinearities, this problem formulation can be applied to the identification of linear-nonlinear cascade models of dynamical systems, in particular for neural spike responses [6-8].
In recent years, there has been considerable interest in so-called approximate message passing (AMP) methods for this estimation problem. The AMP techniques use Gaussian and quadratic approximations of loopy belief propagation (LBP) to provide estimation methods that are computationally efficient, general and analytically tractable. However, the AMP methods generally require that the distributions P_X and P_{Y|Z} are known perfectly. When the parameters λ_x and λ_z are unknown, various extensions have been proposed, including combining AMP methods with expectation-maximization (EM) estimation [9-12] and hybrid graphical models approaches [13]. In this work, we present a novel method for joint parameter and vector estimation, called adaptive generalized AMP (adaptive GAMP), that extends the GAMP method of [14]. We present two major theoretical results related to adaptive GAMP. We first show that, similar to the analysis of the standard GAMP algorithm, the componentwise asymptotic behavior of adaptive GAMP can be exactly described by a simple set of scalar state evolution (SE) equations [14-18]. An important consequence of this result is a theoretical justification of the EM-GAMP algorithm in [9-12], which is a special case of adaptive GAMP with a particular choice of adaptation functions. Our second result demonstrates the asymptotic consistency of adaptive GAMP when the adaptation functions correspond to maximum-likelihood (ML) parameter estimation. We show that when the ML estimation is computed exactly, the estimated parameters converge to the true values, and the performance of adaptive GAMP asymptotically coincides with the performance of the oracle GAMP algorithm that knows the correct parameter values. Adaptive GAMP thus provides a computationally efficient method for solving a wide variety of joint estimation and learning problems with a simple, exact performance characterization and provable conditions for asymptotic consistency.

All proofs and some technical details that have been omitted for space appear in the full paper [19], which also provides more background and simulations.
2 Adaptive GAMP
Approximate message passing (AMP) refers to a class of algorithms based on Gaussian approximations of loopy belief propagation (LBP) for the estimation of the vectors x and z according to the model described in Section 1. These methods originated from CDMA multiuser detection problems in [15, 20, 21]; more recently, they have attracted considerable attention in compressed sensing [17, 18, 22]. The Gaussian approximations used in AMP are closely related to standard expectation propagation techniques [23, 24], but with additional simplifications that exploit the linear coupling between the variables x and z. The key benefits of AMP methods are their computational performance, their large domain of application, and, for certain large random A, their exact asymptotic performance characterizations with testable conditions for optimality [15-18]. This paper considers an adaptive version of the so-called generalized AMP (GAMP) method of [14] that extends the algorithm in [22] to arbitrary output distributions P_{Y|Z}.

The original GAMP algorithm of [14] requires that the distributions P_X and P_{Y|Z} are known. We propose an adaptive GAMP, shown in Algorithm 1, to allow for simultaneous estimation of the distributions P_X and P_{Y|Z} along with the estimation of x and z. The algorithm assumes that the distributions P_X and P_{Y|Z} have the parametric forms

    P_X(x|λ_x),    P_{Y|Z}(y|z, λ_z),                                    (1)
for parameters λ_x ∈ Λ_x and λ_z ∈ Λ_z and for parameter sets Λ_x and Λ_z. Algorithm 1 produces a sequence of estimates x̂^t and ẑ^t for x and z along with parameter estimates λ̂_x^t and λ̂_z^t. The precise value of these estimates depends on several factors in the algorithm, including the termination criteria and the choice of what we will call estimation functions G_x^t, G_z^t and G_s^t, and adaptation functions H_x^t and H_z^t.
Algorithm 1 Adaptive GAMP

Require: Matrix A, estimation functions G_x^t, G_s^t and G_z^t, and adaptation functions H_x^t and H_z^t.
 1: Initialize t ← 0, s^{-1} ← 0, and some values for x̂^0, τ_x^0.
 2: repeat
 3:   {Output node update}
 4:   τ_p^t ← ||A||_F^2 τ_x^t / m
 5:   p^t ← A x̂^t - s^{t-1} τ_p^t
 6:   λ̂_z^t ← H_z^t(p^t, y, τ_p^t)
 7:   ẑ_i^t ← G_z^t(p_i^t, y_i, τ_p^t, λ̂_z^t) for all i = 1, ..., m
 8:   s_i^t ← G_s^t(p_i^t, y_i, τ_p^t, λ̂_z^t) for all i = 1, ..., m
 9:   τ_s^t ← -(1/m) Σ_i ∂G_s^t(p_i^t, y_i, τ_p^t, λ̂_z^t)/∂p_i
10:   {Input node update}
11:   1/τ_r^t ← ||A||_F^2 τ_s^t / n
12:   r^t ← x̂^t + τ_r^t A^T s^t
13:   λ̂_x^t ← H_x^t(r^t, τ_r^t)
14:   x̂_j^{t+1} ← G_x^t(r_j^t, τ_r^t, λ̂_x^t) for all j = 1, ..., n
15:   τ_x^{t+1} ← (τ_r^t/n) Σ_j ∂G_x^t(r_j^t, τ_r^t, λ̂_x^t)/∂r_j
16: until Terminated
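For concreteness, here is a compact Python transcription of Algorithm 1. It is a sketch rather than reference code: G_x, G_z, G_s and H_x, H_z are user-supplied componentwise (vectorized) functions, and the two-sided numerical difference used for the derivatives in lines 9 and 15 is our simplification; closed-form derivatives would normally be used.

    import numpy as np

    def num_deriv(f, u, h=1e-6):
        """Componentwise two-sided difference of a vectorized scalar function."""
        return (f(u + h) - f(u - h)) / (2 * h)

    def adaptive_gamp(A, y, Gx, Gz, Gs, Hx, Hz, x0, tau_x0, n_iter=50):
        m, n = A.shape
        normA2 = np.linalg.norm(A, "fro") ** 2
        x_hat, tau_x, s = x0.copy(), tau_x0, np.zeros(m)
        for t in range(n_iter):
            # Output node update (lines 4-9)
            tau_p = normA2 * tau_x / m
            p = A @ x_hat - s * tau_p
            lam_z = Hz(p, y, tau_p)
            z_hat = Gz(p, y, tau_p, lam_z)
            s = Gs(p, y, tau_p, lam_z)
            tau_s = -num_deriv(lambda q: Gs(q, y, tau_p, lam_z), p).mean()
            # Input node update (lines 11-15)
            tau_r = n / (normA2 * tau_s)
            r = x_hat + tau_r * (A.T @ s)
            lam_x = Hx(r, tau_r)
            x_hat = Gx(r, tau_r, lam_x)
            tau_x = tau_r * num_deriv(lambda q: Gx(q, tau_r, lam_x), r).mean()
        return x_hat, z_hat, lam_x, lam_z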
The choice of the estimation and adaptation functions allows for considerable flexibility in the algorithm. For example, it is shown in [14] that G_x^t, G_z^t, and G_s^t can be selected such that the GAMP algorithm implements Gaussian approximations of either max-sum LBP or sum-product LBP, which approximate the maximum a posteriori (MAP) or minimum mean-squared error (MMSE) estimates of x given y, respectively. The adaptation functions can also be selected for a number of different parameter-estimation strategies. Because of space limitations, we present only the estimation functions for the sum-product GAMP algorithm from [14] along with an ML-type adaptation. Some of the analysis below, however, applies more generally.
As described in [14], the sum-product estimation can be implemented with the functions

    G_x^t(r, τ_r, λ̂_x) := E[X | R = r; τ_r, λ̂_x],                       (2a)
    G_z^t(p, y, τ_p, λ̂_z) := E[Z | P = p, Y = y; τ_p, λ̂_z],             (2b)
    G_s^t(p, y, τ_p, λ̂_z) := (1/τ_p) ( G_z^t(p, y, τ_p, λ̂_z) - p ),     (2c)

where the expectations are with respect to the scalar random variables

    R = X + V_x,  V_x ~ N(0, τ_r),  X ~ P_X(·|λ̂_x),                     (3a)
    Z = P + V_z,  V_z ~ N(0, τ_p),  Y ~ P_{Y|Z}(·|Z, λ̂_z).              (3b)
The estimation functions (2) correspond to scalar estimates of random variables in additive white Gaussian noise (AWGN). A key result of [14] is that, when the parameters are set to the true values (i.e., (λ̂_x, λ̂_z) = (λ_x, λ_z)), the outputs x̂^t and ẑ^t can be interpreted as sum-product estimates of the conditional expectations E[x|y] and E[z|y]. The algorithm thus reduces the vector-valued estimation problem to a computationally simple sequence of scalar AWGN estimation problems along with linear transforms.
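To make (2a) and (3a) concrete, the sketch below gives the scalar sum-product input function for a Bernoulli-Gaussian prior X ~ (1-ρ)δ_0 + ρ N(0, σ_x²), i.e., the posterior mean of X given R = X + V, V ~ N(0, τ_r). The derivation is standard; the parameter packing lam = (rho, sx2) is our own convention.

    import numpy as np
    from scipy.special import expit  # numerically stable logistic function

    def Gx_bernoulli_gauss(r, tau_r, lam):
        """Posterior mean E[X | R = r] for X ~ (1-rho)*delta_0 + rho*N(0, sx2)."""
        rho, sx2 = lam
        v1 = sx2 + tau_r                     # variance of R when X is nonzero
        # posterior log-odds that the component is nonzero
        log_odds = (np.log(rho / (1 - rho))
                    + 0.5 * np.log(tau_r / v1)
                    + 0.5 * r**2 * (1.0 / tau_r - 1.0 / v1))
        pi = expit(log_odds)                 # P(X != 0 | R = r)
        return pi * (sx2 / v1) * r           # Wiener-style shrinkage when "on"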
The adaptation functions H_x^t and H_z^t in Algorithm 1 produce the estimates of the parameters λ_x and λ_z. In the special case when H_x^t and H_z^t produce the fixed outputs

    H_z^t(p^t, y, τ_p^t) = λ̄_z^t,    H_x^t(r^t, τ_r^t) = λ̄_x^t,

for pre-computed values λ̄_z^t and λ̄_x^t, the adaptive GAMP algorithm reduces to the standard (non-adaptive) GAMP algorithm of [14]. The non-adaptive GAMP algorithm can be used when the parameters λ_x and λ_z are known.
When the parameters λ_x and λ_z are unknown, it has been proposed in [9-12] that they can be estimated via an EM method that exploits the fact that GAMP provides estimates of the posterior distributions of x and z given the current parameter estimates. As described in the full paper [19], this EM-GAMP method corresponds to a special case of the adaptive GAMP method for a particular choice of the adaptation functions H_x^t and H_z^t.
However, in this work, we consider an alternate parameter estimation method based on ML adaptation. The ML adaptation uses the following fact that we will rigorously justify below: for certain large random A, at any iteration t, the components of the vector r^t and the joint vectors (p^t, y^t) will be distributed as

    R = α_r X + V_x,  V_x ~ N(0, ξ_r),  X ~ P_X(·|λ_x*),                 (4a)
    Z = P + V_z,  (Z, P) ~ N(0, K_p),  Y ~ P_{Y|Z}(·|Z, λ_z*),           (4b)

where λ_x* and λ_z* are the "true" parameters, and the scalars α_r and ξ_r and the covariance matrix K_p are parameters that depend on the estimation and adaptation functions used in the previous iterations. Remarkably, the distributions of the components of r^t and (p^t, y^t) will follow (4) even if the estimation functions in the iterations prior to t used incorrect parameter values. The adaptive GAMP algorithm can thus attempt to estimate the parameters via maximum-likelihood (ML) estimation:

    H_x^t(r^t, τ_r^t) := arg max_{λ_x ∈ Λ_x}  max_{(α_r, ξ_r) ∈ S_x(τ_r^t)}  (1/n) Σ_{j=0}^{n-1} φ_x(r_j^t, λ_x, α_r, ξ_r),   (5a)

    H_z^t(p^t, y, τ_p^t) := arg max_{λ_z ∈ Λ_z}  max_{K_p ∈ S_z(τ_p^t)}  (1/m) Σ_{i=0}^{m-1} φ_z(p_i^t, y_i, λ_z, K_p),       (5b)

where S_x and S_z are sets of possible values for the parameters (α_r, ξ_r) and K_p, and φ_x and φ_z are the log-likelihoods

    φ_x(r, λ_x, α_r, ξ_r) = log p_R(r | λ_x, α_r, ξ_r),                  (6a)
    φ_z(p, y, λ_z, K_p) = log p_{P,Y}(p, y | λ_z, K_p),                  (6b)

and p_R and p_{P,Y} are the probability density functions corresponding to the distributions in (4).
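As an illustration of (5a), the sketch below fits the Bernoulli-Gaussian input family by directly maximizing the empirical marginal log-likelihood of R = α_r X + V; the box constraints standing in for Λ_x and S_x(τ_r^t), as well as the initial guess, are placeholders, not the sets analyzed in the paper.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_loglik(theta, r):
        rho, sx2, alpha, xi = theta
        # marginal density of R = alpha*X + V under the Bernoulli-Gaussian model
        p = ((1 - rho) * norm.pdf(r, scale=np.sqrt(xi))
             + rho * norm.pdf(r, scale=np.sqrt(alpha**2 * sx2 + xi)))
        return -np.mean(np.log(p + 1e-300))

    def Hx_ml(r, tau_r):
        # illustrative box constraints standing in for Lambda_x and S_x(tau_r)
        bounds = [(1e-3, 1 - 1e-3), (1e-3, 1e3), (1e-3, 1e2), (1e-6, tau_r)]
        res = minimize(neg_loglik, x0=[0.1, 1.0, 1.0, 0.5 * tau_r],
                       args=(r,), bounds=bounds, method="L-BFGS-B")
        rho, sx2 = res.x[0], res.x[1]
        return (rho, sx2)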
3 Convergence and Asymptotic Consistency with Gaussian Transforms

3.1 General State Evolution Analysis
Before proving the asymptotic consistency of the adaptive GAMP method with ML adaptation, we first prove a more general convergence result. Among other consequences, the result will justify the distribution model (4) assumed by the ML adaptation. Similar to the SE analyses in [14, 18], we consider the asymptotic behavior of the adaptive GAMP algorithm with large i.i.d. Gaussian matrices. The assumptions are summarized as follows. Details can be found in the full paper [19, Assumption 2].

Assumption 1 Consider the adaptive GAMP algorithm running on a sequence of problems indexed by the dimension n, satisfying the following:

(a) For each n, the matrix A ∈ R^{m×n} has i.i.d. components with A_ij ~ N(0, 1/m), and the dimension m = m(n) is a deterministic function of n satisfying n/m → β for some β > 0 as n → ∞.
(b) The input vector x and initial condition x̂^0 are deterministic sequences whose components converge empirically with bounded moments of order s = 2k - 2 as

    lim_{n→∞} (x, x̂^0) = (X, X̂^0)   (PL(s)),                            (7)

with k = 2, to some random vector (X, X̂^0). See [19] for a precise statement of this type of convergence.
(c) The output vectors z and y ∈ R^m are generated by

    z = Ax,    y = h(z, w),                                              (8)

for some scalar function h(z, w), where the disturbance vector w is deterministic but converges empirically as

    lim_{n→∞} w = W   (PL(s)),                                           (9)

with s = 2k - 2, k = 2, and W some random variable. We let P_{Y|Z} denote the conditional distribution of the random variable Y = h(Z, W).

(d) Suitable continuity assumptions on the estimation functions G_x^t, G_z^t and G_s^t and adaptation functions H_x^t and H_z^t; see [19] for details.
Now define the sets of vectors

    θ_x^t := {(x_j, r_j^t, x̂_j^{t+1}), j = 1, ..., n},    θ_z^t := {(z_i, ẑ_i^t, y_i, p_i^t), i = 1, ..., m}.    (10)
The first vector set, θ_x^t, represents the components of the "true," but unknown, input vector x, its adaptive GAMP estimate x̂^t, as well as r^t. The second vector set, θ_z^t, contains the components of the "true," but unknown, output vector z, its GAMP estimate ẑ^t, as well as p^t and the observed output y. The sets θ_x^t and θ_z^t are implicitly functions of the dimension n. Our main result, Theorem 1 below, characterizes the asymptotic joint distribution of the components of these two sets as n → ∞. Specifically, we will show that the empirical distributions of the components of θ_x^t and θ_z^t converge to random vectors of the form

    θ̄_x^t := (X, R^t, X̂^{t+1}),    θ̄_z^t := (Z, Ẑ^t, Y, P^t),          (11)

where X is the random variable in the initial condition (7). R^t and X̂^{t+1} are given by

    R^t = α_r^t X + V^t,  V^t ~ N(0, ξ_r^t),    X̂^{t+1} = G_x^t(R^t, τ̄_r^t, λ̄_x^t),     (12)

for some deterministic constants α_r^t, ξ_r^t, τ̄_r^t and λ̄_x^t that will be defined momentarily. Similarly, (Z, P^t) ~ N(0, K̄_p^t), and

    Ẑ^t = G_z^t(P^t, Y, τ̄_p^t, λ̄_z^t),    Y ~ P_{Y|Z}(·|Z),             (13)

where W is the random variable in (9), and K̄_p^t and λ̄_z^t are also deterministic constants. The deterministic constants above can be computed iteratively with the state evolution (SE) equations shown in Algorithm 2.
Theorem 1 Consider the random vectors θ_x^t and θ_z^t generated by the outputs of GAMP under Assumption 1. Let θ̄_x^t and θ̄_z^t be the random vectors in (11) with the parameters determined by the SE equations in Algorithm 2. Then, for any fixed t, almost surely, the components of θ_x^t and θ_z^t converge empirically with bounded moments of order k = 2 as

    lim_{n→∞} θ_x^t = θ̄_x^t,    lim_{n→∞} θ_z^t = θ̄_z^t   (PL(k)),      (17)

where θ̄_x^t and θ̄_z^t are given in (11). In addition, for any t, the limits

    lim_n λ̂_x^t = λ̄_x^t,   lim_n λ̂_z^t = λ̄_z^t,   lim_n τ_r^t = τ̄_r^t,   lim_n τ_p^t = τ̄_p^t,      (18)

also hold almost surely.
Similar to several other analyses of AMP algorithms such as [14-18], the theorem provides a scalar equivalent model for the componentwise behavior of the adaptive GAMP method. That is, asymptotically the components of the sets θ_x^t and θ_z^t in (10) are distributed identically to simple scalar random variables. The parameters in these random variables can be computed via the SE equations (14)-(16), which can be evaluated with one- or two-dimensional integrals.
Algorithm 2 Adaptive GAMP State Evolution

Given the distributions in Assumption 1, compute the sequence of parameters as follows:

• Initialization: Set t = 0 with

      K_x^0 = cov(X, X̂^0),    τ̄_x^0 = τ_x^0,                                        (14)

  where the expectation is over the random variables (X, X̂^0) in Assumption 1(b) and τ_x^0 is the initial value in the GAMP algorithm.

• Output node update: Compute the variables associated with θ̄_z^t:

      τ̄_p^t = β τ̄_x^t,    K̄_p^t = β K_x^t,    λ̄_z^t = H_z^t(P^t, τ̄_p^t),           (15a)

      τ̄_r^t = -E^{-1}[ (∂/∂p) G_s^t(P^t, Y, τ̄_p^t, λ̄_z^t) ],
      ξ_r^t = (τ̄_r^t)^2 E[ G_s^t(P^t, Y, τ̄_p^t, λ̄_z^t)^2 ],                         (15b)

      α_r^t = τ̄_r^t E[ (∂/∂z) G_s^t(P^t, h(z, W), τ̄_p^t, λ̄_z^t) |_{z=Z} ],          (15c)

  where the expectations are over the random variables (P^t, Y, W).

• Input node update: Compute the variables associated with θ̄_x^t:

      λ̄_x^t = H_x^t(R^t, τ̄_r^t),    τ̄_x^{t+1} = τ̄_r^t E[ (∂/∂r) G_x^t(R^t, τ̄_r^t, λ̄_x^t) ],   (16a)

      K_x^{t+1} = cov(X, X̂^{t+1}),                                                   (16b)

  where the expectation is over the random variables (X, X̂^{t+1}).
From this scalar equivalent model, one can compute a large class of componentwise performance metrics, such as the mean-squared error (MSE) or detection error rates. Thus, the SE analysis shows that for essentially arbitrary estimation and adaptation functions, and distributions on the true input and disturbance, we can exactly evaluate the asymptotic behavior of the adaptive GAMP algorithm. In addition, when the parameter values λ_x and λ_z are fixed, the SE equations in Algorithm 2 reduce to the SE equations for the standard (non-adaptive) GAMP algorithm described in [14].
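When closed-form integration is inconvenient, these expectations can also be approximated by Monte Carlo. The sketch below evaluates the two quantities in (15b) for a user-supplied G_s and channel sampler; the sample size and finite-difference step are our own choices, not part of the analysis.

    import numpy as np

    def se_output_step(Gs, channel, Kp, tau_p, lam_z, n_mc=200_000, h=1e-5):
        """Monte Carlo estimate of tau_r and xi_r in (15b)."""
        rng = np.random.default_rng(1)
        ZP = rng.multivariate_normal([0.0, 0.0], Kp, size=n_mc)  # (Z, P) ~ N(0, Kp)
        Z, P = ZP[:, 0], ZP[:, 1]
        Y = channel(Z, rng)                                      # Y ~ P_{Y|Z}(.|Z)
        dGs = (Gs(P + h, Y, tau_p, lam_z) - Gs(P - h, Y, tau_p, lam_z)) / (2 * h)
        tau_r = -1.0 / dGs.mean()                                # -E^{-1}[dGs/dp]
        xi_r = tau_r**2 * np.mean(Gs(P, Y, tau_p, lam_z) ** 2)
        return tau_r, xi_r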
3.2 Asymptotic Consistency with ML Adaptation
The general result, Theorem 1, can be applied to the adaptive GAMP algorithm with arbitrary estimation and adaptation functions. In particular, the result can be used to rigorously justify the SE analysis of EM-GAMP presented in [11, 12]. Here, we use the result to prove the asymptotic parameter consistency of adaptive GAMP with ML adaptation. The key point is to realize that the distributions (12) and (13) exactly match the distributions (4) assumed by the ML adaptation functions (5). Thus, the ML adaptation should work provided that the maximizations in (5) yield the correct parameter estimates. This condition is essentially an identifiability requirement that we make precise with the following definitions.
Definition 1 Consider a family of distributions {P_X(x|λ_x), λ_x ∈ Λ_x}, a set S_x of parameters (α_r, ξ_r) of a Gaussian channel, and a function φ_x(r, λ_x, α_r, ξ_r). We say that P_X(x|λ_x) is identifiable with Gaussian outputs with parameter set S_x and function φ_x if:

(a) The sets S_x and Λ_x are compact.

(b) For any "true" parameters λ_x* ∈ Λ_x and (α_r*, ξ_r*) ∈ S_x, the maximization

    λ̂_x = arg max_{λ_x ∈ Λ_x}  max_{(α_r, ξ_r) ∈ S_x}  E[ φ_x(α_r* X + V, λ_x, α_r, ξ_r) | λ_x*, α_r*, ξ_r* ],     (19)

is well-defined and unique and returns the true value, λ̂_x = λ_x*. The expectation in (19) is with respect to X ~ P_X(·|λ_x*) and V ~ N(0, ξ_r*).

(c) Suitable continuity assumptions; see [19] for details.
Definition 2 Consider a family of conditional distributions {P_{Y|Z}(y|z, λ_z), λ_z ∈ Λ_z} generated by the mapping Y = h(Z, W, λ_z), where W ~ P_W is some random variable and h(z, w, λ_z) is a scalar function. Let S_z be a set of covariance matrices K_p and let φ_z(y, p, λ_z, K_p) be some function. We say that the conditional distribution family P_{Y|Z}(·|·, λ_z) is identifiable with Gaussian inputs with covariance set S_z and function φ_z if:

(a) The parameter sets S_z and Λ_z are compact.

(b) For any "true" parameter λ_z* ∈ Λ_z and true covariance K_p*, the maximization

    λ̂_z = arg max_{λ_z ∈ Λ_z}  max_{K_p ∈ S_z}  E[ φ_z(Y, P, λ_z, K_p) | λ_z*, K_p* ],        (20)

is well-defined and unique and returns the true value, λ̂_z = λ_z*. The expectation in (20) is with respect to Y|Z ~ P_{Y|Z}(y|z, λ_z*) and (Z, P) ~ N(0, K_p*).

(c) Suitable continuity assumptions; see [19] for details.
Definitions 1 and 2 essentially require that the parameters λ_x and λ_z can be identified through a maximization. The functions φ_x and φ_z can be the log-likelihood functions (6a) and (6b), although we permit other functions as well. See [19] for further discussion of the likelihood functions as well as the choice of the parameter sets S_x and S_z.
Theorem 2 Let P_X(·|λ_x) and P_{Y|Z}(·|·, λ_z) be families of input and output distributions that are identifiable in the sense of Definitions 1 and 2. Consider the outputs of the adaptive GAMP algorithm using the ML adaptation functions (5) with the functions φ_x and φ_z and parameter sets in Definitions 1 and 2. In addition, suppose Assumption 1(a) to (c) hold, where the distribution of X is given by P_X(·|λ_x*) for some "true" parameter λ_x* ∈ Λ_x and the conditional distribution of Y given Z is given by P_{Y|Z}(y|z, λ_z*) for some "true" parameter λ_z* ∈ Λ_z. Then, under suitable continuity conditions (see [19] for details), for any fixed t:

(a) The components of θ_x^t and θ_z^t in (10) converge empirically with bounded moments of order k = 2 as in (17), and the limits (18) hold almost surely.

(b) If (α_r^t, ξ_r^t) ∈ S_x(τ̄_r^t) for some t, then lim_{n→∞} λ̂_x^t = λ̄_x^t = λ_x* almost surely.

(c) If K̄_p^t ∈ S_z(τ̄_p^t) for some t, then lim_{n→∞} λ̂_z^t = λ̄_z^t = λ_z* almost surely.
The theorem shows, remarkably, that for a very large class of parameterized distributions, the adaptive GAMP algorithm with ML adaptation is able to asymptotically estimate the correct parameters. Also, once the consistency limits in (b) and (c) hold, the SE equations in Algorithm 2 reduce to the SE equations for the non-adaptive GAMP method running with the true parameters. Thus, we conclude that there is asymptotically no performance loss between the adaptive GAMP algorithm and a corresponding oracle GAMP algorithm that knows the correct parameters, in the sense that the empirical distributions of the algorithm outputs are described by the same SE equations.
4 Numerical Example: Estimation of a Gauss-Bernoulli Input
Recent results suggest that there is considerable value in learning priors P_X in the context of compressed sensing [25], which considers the estimation of sparse vectors x from underdetermined measurements (m < n). It is known that estimators such as LASSO offer a certain optimal min-max performance over a large class of sparse distributions [26]. However, for many particular distributions, there is a potentially large performance gap between LASSO and the MMSE estimator with the correct prior. This gap was the main motivation for [9, 10], which showed large gains of the EM-GAMP method due to its ability to learn the prior. Here, we present a simple simulation to illustrate the performance gain of adaptive GAMP and its asymptotic consistency. Specifically, Fig. 2 compares the performance of adaptive GAMP for estimation of a sparse Gauss-Bernoulli signal x ∈ R^n from m noisy measurements

    y = Ax + w,

where the additive noise w is random with i.i.d. entries w_i ~ N(0, σ²).
[Figure 2 plots: two panels showing MSE (dB) against (a) the measurement ratio m/n and (b) the noise variance σ²; legend: State Evolution, LASSO, Oracle GAMP, Adaptive GAMP.]
Figure 2: Reconstruction of a Gauss-Bernoulli signal from noisy measurements. The average reconstruction MSE is plotted against (a) the measurement ratio m/n and (b) the AWGN variance σ². The plots illustrate that adaptive GAMP yields considerable improvement over the ℓ1-based LASSO estimator. Moreover, it exactly matches the performance of oracle GAMP that knows the prior parameters.
The signal of length n = 400 has 20% nonzero components drawn from the Gaussian distribution of variance 5. Adaptive GAMP uses EM iterations, which approximate ML parameter estimation, to jointly recover the unknown signal x and the true parameters λ_x = (ρ = 0.2, σ_x² = 5). The performance of adaptive GAMP is compared to that of LASSO with the MSE-optimal regularization parameter, and to oracle GAMP that knows the parameters of the prior exactly. For generating the graphs, we performed 1000 random trials by forming the measurement matrix A from i.i.d. zero-mean Gaussian random variables of variance 1/m. In Figure 2(a), we keep the variance of the noise fixed to σ² = 0.1 and plot the average MSE of the reconstruction against the measurement ratio m/n. In Figure 2(b), we keep the measurement ratio fixed to m/n = 0.75 and plot the average MSE of the reconstruction against the noise variance σ². For completeness, we also provide the asymptotic MSE values computed via the SE recursion. The results illustrate that GAMP significantly outperforms LASSO over the whole range of m/n and σ². Moreover, the results corroborate the consistency of adaptive GAMP, which achieves nearly identical quality of reconstruction with oracle GAMP. The performance results here and in [19] indicate that adaptive GAMP can be an effective method for estimation when the parameters of the problem are difficult to characterize and must be estimated from data.
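The measurement setup of this experiment is straightforward to reproduce; the sketch below generates the data for one random trial (the estimators themselves are as in the Algorithm 1 sketch earlier).

    import numpy as np

    rng = np.random.default_rng(0)
    n, ratio, sigma2 = 400, 0.75, 0.1
    m = int(ratio * n)

    support = rng.random(n) < 0.2                         # 20% nonzero entries
    x = np.where(support, rng.normal(scale=np.sqrt(5.0), size=n), 0.0)
    A = rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))   # i.i.d. N(0, 1/m)
    y = A @ x + rng.normal(scale=np.sqrt(sigma2), size=m)

    def mse_db(x_hat, x):
        return 10.0 * np.log10(np.mean((x_hat - x) ** 2))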
5 Conclusions and Future Work
We have presented an adaptive GAMP method for the estimation of i.i.d. vectors x observed through a known linear transform followed by an arbitrary, componentwise random transform. The procedure, which is a generalization of the EM-GAMP methodology of [9, 10], estimates both the vector x as well as parameters in the source and componentwise output transform. In the case of large i.i.d. Gaussian transforms with ML parameter estimation, it is shown that the adaptive GAMP method is provably asymptotically consistent, in that the parameter estimates converge to the true values. This convergence result holds over a large class of models with essentially arbitrarily complex parameterizations. Moreover, the algorithm is computationally efficient since it reduces the vector-valued estimation problem to a sequence of scalar estimation problems in Gaussian noise. We believe that this method is applicable to a large class of linear-nonlinear models with provable guarantees and that it can have applications in a wide range of problems. We have mentioned the use of the method for learning sparse priors in compressed sensing. Future work will include possible extensions to non-Gaussian matrices.
References
[1] M. Tipping, "Sparse Bayesian learning and the relevance vector machine," J. Machine Learning Research, vol. 1, pp. 211-244, Sep. 2001.
[2] M. West, "Bayesian factor regression models in the 'large p, small n' paradigm," Bayesian Statistics, vol. 7, 2003.
[3] D. Wipf and B. Rao, "Sparse Bayesian learning for basis selection," IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2153-2164, Aug. 2004.
[4] S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Trans. Signal Process., vol. 56, pp. 2346-2356, Jun. 2008.
[5] V. Cevher, "Learning with compressible priors," in Proc. NIPS, Vancouver, BC, Dec. 2009.
[6] S. Billings and S. Fakhouri, "Identification of systems containing linear dynamic and static nonlinear elements," Automatica, vol. 18, no. 1, pp. 15-26, 1982.
[7] I. W. Hunter and M. J. Korenberg, "The identification of nonlinear biological systems: Wiener and Hammerstein cascade models," Biological Cybernetics, vol. 55, no. 2-3, pp. 135-144, 1986.
[8] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli, "Spike-triggered neural characterization," J. Vision, vol. 6, no. 4, pp. 484-507, Jul. 2006.
[9] J. P. Vila and P. Schniter, "Expectation-maximization Bernoulli-Gaussian approximate message passing," in Conf. Rec. 45th Asilomar Conf. Signals, Syst. & Comput., Pacific Grove, CA, Nov. 2011, pp. 799-803.
[10] J. P. Vila and P. Schniter, "Expectation-maximization Gaussian-mixture approximate message passing," in Proc. Conf. on Inform. Sci. & Sys., Princeton, NJ, Mar. 2012.
[11] F. Krzakala, M. Mézard, F. Sausset, Y. Sun, and L. Zdeborová, "Statistical physics-based reconstruction in compressed sensing," arXiv:1109.4424, Sep. 2011.
[12] F. Krzakala, M. Mézard, F. Sausset, Y. Sun, and L. Zdeborová, "Probabilistic reconstruction in compressed sensing: Algorithms, phase diagrams, and threshold achieving matrices," arXiv:1206.3953, Jun. 2012.
[13] S. Rangan, A. K. Fletcher, V. K. Goyal, and P. Schniter, "Hybrid generalized approximate message passing with applications to structured sparsity," in Proc. IEEE Int. Symp. Inform. Theory, Cambridge, MA, Jul. 2012, pp. 1241-1245.
[14] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," in Proc. IEEE Int. Symp. Inform. Theory, Saint Petersburg, Russia, Jul.-Aug. 2011, pp. 2174-2178.
[15] D. Guo and C.-C. Wang, "Asymptotic mean-square optimality of belief propagation for sparse linear systems," in Proc. IEEE Inform. Theory Workshop, Chengdu, China, Oct. 2006, pp. 194-198.
[16] D. Guo and C.-C. Wang, "Random sparse linear systems observed via arbitrary channels: A decoupling principle," in Proc. IEEE Int. Symp. Inform. Theory, Nice, France, Jun. 2007, pp. 946-950.
[17] S. Rangan, "Estimation with random linear mixing, belief propagation and compressed sensing," in Proc. Conf. on Inform. Sci. & Sys., Princeton, NJ, Mar. 2010, pp. 1-6.
[18] M. Bayati and A. Montanari, "The dynamics of message passing on dense graphs, with applications to compressed sensing," IEEE Trans. Inform. Theory, vol. 57, no. 2, pp. 764-785, Feb. 2011.
[19] U. S. Kamilov, S. Rangan, A. K. Fletcher, and M. Unser, "Approximate message passing with consistent parameter estimation and applications to sparse learning," arXiv:1207.3859 [cs.IT], Jul. 2012.
[20] J. Boutros and G. Caire, "Iterative multiuser joint decoding: Unified framework and asymptotic analysis," IEEE Trans. Inform. Theory, vol. 48, no. 7, pp. 1772-1793, Jul. 2002.
[21] T. Tanaka and M. Okada, "Approximate belief propagation, density evolution, and neurodynamics for CDMA multiuser detection," IEEE Trans. Inform. Theory, vol. 51, no. 2, pp. 700-706, Feb. 2005.
[22] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proc. Nat. Acad. Sci., vol. 106, no. 45, pp. 18914-18919, Nov. 2009.
[23] T. P. Minka, "A family of algorithms for approximate Bayesian inference," Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, 2001.
[24] M. Seeger, "Bayesian inference and optimal design for the sparse linear model," J. Machine Learning Research, vol. 9, pp. 759-813, Sep. 2008.
[25] E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Trans. Inform. Theory, vol. 52, no. 12, pp. 5406-5425, Dec. 2006.
[26] D. Donoho, I. Johnstone, A. Maleki, and A. Montanari, "Compressed sensing over ℓp-balls: Minimax mean square error," in Proc. ISIT, St. Petersburg, Russia, Jun. 2011.
3,864 | 4,499 | Structured Learning of Gaussian Graphical Models
Karthik Mohan, Michael Jae-Yoon Chung, Seungyeop Han,
Daniela Witten, Su-In Lee, Maryam Fazel
Abstract
We consider estimation of multiple high-dimensional Gaussian graphical models corresponding to a single set of nodes under several distinct conditions. We
assume that most aspects of the networks are shared, but that there are some structured differences between them. Specifically, the network differences are generated from node perturbations: a few nodes are perturbed across networks, and
most or all edges stemming from such nodes differ between networks. This corresponds to a simple model for the mechanism underlying many cancers, in which
the gene regulatory network is disrupted due to the aberrant activity of a few specific genes. We propose to solve this problem using the perturbed-node joint
graphical lasso, a convex optimization problem that is based upon the use of a
row-column overlap norm penalty. We then solve the convex problem using an
alternating directions method of multipliers algorithm. Our proposal is illustrated
on synthetic data and on an application to brain cancer gene expression data.
1
Introduction
Probabilistic graphical models are widely used in a variety of applications, from computer vision
to natural language processing to computational biology. As this modeling framework is used in
increasingly complex domains, the problem of selecting from among the exponentially large space
of possible network structures is of paramount importance. This problem is especially acute in the
high-dimensional setting, in which the number of variables or nodes in the graphical model is much
larger than the number of observations that are available to estimate it.
As a motivating example, suppose that we have access to gene expression measurements for n1 lung
cancer patients and n2 brain cancer patients, and that we would like to estimate the gene regulatory
networks underlying these two types of cancer. We can consider estimating a single network on the
basis of all n1 +n2 patients. However, this approach is unlikely to be successful, due to fundamental
differences between the true lung cancer and brain cancer gene regulatory networks that stem from
tissue specificity of gene expression as well as differing etiology of the two diseases. As an alternative, we could simply estimate a gene regulatory network using the n1 lung cancer patients and a
separate gene regulatory network using the n2 brain cancer patients. However, this approach fails to
exploit the fact that the two underlying gene regulatory networks likely have substantial commonality, such as tumor-specific pathways. In order to effectively make use of the available data, we need
a principled approach for jointly estimating the lung cancer and brain cancer networks in such a way
that the two network estimates are encouraged to be quite similar to each other, while allowing for
certain structured differences. In fact, these differences themselves may be of scientific interest.
In this paper, we propose a general framework for jointly learning the structure of K networks, under
the assumption that the networks are similar overall, but may have certain structured differences.
Electrical Engineering, Univ. of Washington. {karna,mfazel}@uw.edu
Computer Science and Engineering, Univ. of Washington. {mjyc,syhan}@cs.washington.edu
Biostatistics, Univ. of Washington. dwitten@uw.edu
Computer Science and Engineering, and Genome Sciences, Univ. of Washington. suinlee@uw.edu
Specifically, we assume that the network differences result from node perturbation; that is, certain
nodes are perturbed across the conditions, and so all or most of the edges associated with those
nodes differ across the K networks. We detect such differences through the use of a row-column
overlap norm penalty. Figure 1 illustrates a toy example in which a pair of networks are identical to
each other, except for a single perturbed node (X2 ) that will be detected using our proposal.
The problem of estimating multiple networks that differ due to node perturbations arises in a number
of applications. For instance, the gene regulatory networks in cancer patients and in normal individuals are likely to be similar to each other, with specific node perturbations that arise from a small
set of genes with somatic (cancer-specific) mutations. Another example arises in the analysis of the
conditional independence relationships among p stocks at two distinct points in time. We might be
interested in detecting stocks that have differential connectivity with all other edges across the two
time points, as these likely correspond to companies that have undergone significant changes. Still
another example can be found in the field of neuroscience, where we are interested in learning how
the connectivity of neurons in the human brain changes over time.
Figure 1: An example of two networks that differ due to node perturbation of X2 . (a) Network 1
and its adjacency matrix. (b) Network 2 and its adjacency matrix. (c) Left: Edges that differ between
the two networks. Right: Shaded cells indicate edges that differ between Networks 1 and 2.
Our proposal for estimating multiple networks in the presence of node perturbation can be formulated as a convex optimization problem, which we solve using an efficient alternating directions
method of multipliers (ADMM) algorithm that significantly outperforms general-purpose optimization tools. We test our method on synthetic data generated from known graphical models, and on
one real-world task that involves inferring gene regulatory networks from experimental data.
The rest of this paper is organized as follows. In Section 2, we present recent work in the estimation
of Gaussian graphical models (GGMs). In Section 3, we present our proposal for structured learning
of multiple GGMs using the row-column overlap norm penalty. In Section 4, we present an ADMM
algorithm that solves the proposed convex optimization problem. Applications to synthetic and real
data are in Section 5, and the discussion is in Section 6.
2 Background
2.1 The graphical lasso
Suppose that we wish to estimate a GGM on the basis of n observations, $X_1, \ldots, X_n \in \mathbb{R}^p$, which
are independent and identically distributed $N(0, \Sigma)$. It is well known that this amounts to learning
the sparsity structure of $\Sigma^{-1}$ [1, 2]. When n > p, one can estimate $\Sigma^{-1}$ by maximum likelihood, but
when p > n this is not possible because the empirical covariance matrix is singular. Consequently,
a number of authors [3, 4, 5, 6, 7, 8, 9] have considered maximizing the penalized log likelihood
\[
\underset{\Theta \in S_{++}^p}{\text{maximize}} \; \left\{ \log\det\Theta - \mathrm{trace}(S\Theta) - \lambda \|\Theta\|_1 \right\}, \tag{1}
\]
where S is the empirical covariance matrix based on the n observations, $\lambda$ is a positive tuning
parameter, $S_{++}^p$ denotes the set of positive definite matrices of size p, and $\|\cdot\|_1$ is the entrywise $\ell_1$
norm. The $\hat\Theta$ that solves (1) serves as an estimate of $\Sigma^{-1}$. This estimate will be positive definite for
any $\lambda > 0$, and sparse when $\lambda$ is sufficiently large, due to the $\ell_1$ penalty [10] in (1). We refer to (1)
as the graphical lasso formulation. This formulation is convex, and efficient algorithms for solving
it are available [6, 4, 5, 7, 11].
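As a point of reference, small instances of (1) can be solved with off-the-shelf software. The sketch below is one such route, using scikit-learn's GraphicalLasso estimator; the data dimensions and the value of alpha (which plays the role of $\lambda$ in (1)) are illustrative assumptions, not settings from this paper.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))     # n = 50 observations of p = 20 variables

model = GraphicalLasso(alpha=0.1)     # alpha plays the role of lambda in (1)
model.fit(X)
Theta_hat = model.precision_          # sparse estimate of the inverse covariance
print(int((np.abs(Theta_hat) > 1e-8).sum()), "non-zero entries")
```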
2.2 The fused graphical lasso
In recent literature, convex formulations have been proposed for extending the graphical lasso (1) to
the setting in which one has access to a number of observations from K distinct conditions. The goal
of these formulations is to estimate a graphical model for each condition under the assumption that the
K networks share certain characteristics [12, 13]. Suppose that $X_1^k, \ldots, X_{n_k}^k \in \mathbb{R}^p$ are independent
and identically distributed from a $N(0, \Sigma^k)$ distribution, for $k = 1, \ldots, K$. Letting $S^k$ denote the
empirical covariance matrix for the kth class, one can maximize the penalized log likelihood
\[
\underset{\Theta^1, \ldots, \Theta^K \in S_{++}^p}{\text{maximize}} \; \left\{ L(\Theta^1, \ldots, \Theta^K) - \lambda_1 \sum_{k=1}^K \sum_{i \neq j} |\Theta^k_{ij}| - \lambda_2 \sum_{i \neq j} P(\Theta^1_{ij}, \ldots, \Theta^K_{ij}) \right\}, \tag{2}
\]
where $L(\Theta^1, \ldots, \Theta^K) = \sum_{k=1}^K n_k \left\{ \log\det\Theta^k - \mathrm{trace}(S^k \Theta^k) \right\}$, $\lambda_1$ and $\lambda_2$ are nonnegative
tuning parameters, and $P(\Theta^1_{ij}, \ldots, \Theta^K_{ij})$ is a penalty applied to each off-diagonal element of
$\Theta^1, \ldots, \Theta^K$ in order to encourage similarity among them. Then the $\hat\Theta^1, \ldots, \hat\Theta^K$ that solve (2)
serve as estimates for $(\Sigma^1)^{-1}, \ldots, (\Sigma^K)^{-1}$. In particular, [13] considered the use of
\[
P(\Theta^1_{ij}, \ldots, \Theta^K_{ij}) = \sum_{k < k'} |\Theta^k_{ij} - \Theta^{k'}_{ij}|, \tag{3}
\]
a fused lasso penalty [14] on the differences between pairs of network edges. When $\lambda_1$ is large, the
network estimates will be sparse, and when $\lambda_2$ is large, pairs of network estimates will have identical
edges. We refer to (2) with penalty (3) as the fused graphical lasso formulation (FGL).
Solving the FGL formulation allows for much more accurate network inference than simply learning
each of the K networks separately, because FGL borrows strength across all available observations
in estimating each network. But in doing so, it implicitly assumes that differences among the K
networks arise from edge perturbations. Therefore, this approach does not take full advantage of
the structure of the learning problem, which is that differences between the K networks are driven
by nodes that differ across networks, rather than differences in individual edges.
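To fix notation, the following sketch evaluates the penalized log likelihood (2) with the fused penalty (3) directly in NumPy. It is a transcription of the formulas for checking candidate estimates, not an optimizer for (2).

```python
import numpy as np

def fgl_objective(Thetas, Ss, ns, lam1, lam2):
    """Penalized log likelihood (2) with the fused penalty (3)."""
    p = Thetas[0].shape[0]
    off = ~np.eye(p, dtype=bool)                       # off-diagonal mask
    loglik = sum(n * (np.linalg.slogdet(T)[1] - np.trace(S @ T))
                 for T, S, n in zip(Thetas, Ss, ns))
    l1 = lam1 * sum(np.abs(T[off]).sum() for T in Thetas)
    K = len(Thetas)
    fused = lam2 * sum(np.abs((Thetas[k] - Thetas[kp])[off]).sum()
                       for k in range(K) for kp in range(k + 1, K))
    return loglik - l1 - fused
```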
3 The perturbed-node joint graphical lasso
3.1 Why is detecting node perturbation challenging?
At first glance, the problem of detecting node perturbation seems simple: in the case K = 2, we
could simply modify (2) as follows,
\[
\underset{\Theta^1, \Theta^2 \in S_{++}^p}{\text{maximize}} \; \left\{ L(\Theta^1, \Theta^2) - \lambda_1 \|\Theta^1\|_1 - \lambda_1 \|\Theta^2\|_1 - \lambda_2 \sum_{j=1}^p \|\Theta^1_j - \Theta^2_j\|_2 \right\}, \tag{4}
\]
where $\Theta^k_j$ is the jth column of the matrix $\Theta^k$. This amounts to applying a group lasso [15] penalty
to the columns of $\Theta^1 - \Theta^2$. Since a group lasso penalty simultaneously shrinks all elements to
which it is applied to zero, it appears that this will give the desired node perturbation structure. We
will refer to this as the naive group lasso approach.
Unfortunately, a problem arises due to the fact that the optimization problem (4) must be performed
subject to a symmetry constraint on $\Theta^1$ and $\Theta^2$. This symmetry constraint effectively imposes
overlap among the elements in the p group lasso penalties in (4), since the (i, j)th element of $\Theta^1 - \Theta^2$ is in both the ith (row) and jth (column) groups. In the presence of overlapping groups, the
group lasso penalty yields estimates whose support is the complement of the union of groups [16, 17].
Figure 2 shows a simple example of $(\Sigma^1)^{-1} - (\Sigma^2)^{-1}$ in the case of node perturbation, as well as the
estimate obtained using (4). The figure reveals that (4) cannot be used to detect node perturbation,
since this task requires a penalty that yields estimates whose support is the union of groups.
3.2 Proposed approach
A node-perturbation in a GGM can be equivalently represented through a perturbation of the entries
of a row and column of the corresponding precision matrix (Figure 1). In other words, we can
Figure 2: A toy example with p = 6 variables, of which two are perturbed (in red). Each panel
shows an estimate of $(\Sigma^1)^{-1} - (\Sigma^2)^{-1}$, displayed as a network and as an adjacency matrix. Shaded
elements of the adjacency matrix indicate non-zero elements of $\hat\Theta^1 - \hat\Theta^2$, as do edges in the network.
Results are shown for (a): PNJGL with q = 2, which gives the correct sparsity pattern; (b)-(c): the
naive group lasso. The naive group lasso is unable to detect the pattern of node perturbation.
detect a single node perturbation by looking for a row and a corresponding column of $\Theta^1 - \Theta^2$
that has nonzero elements. We define a row-column group as a group that consists of a row and the
corresponding column in a matrix. Note that in a $p \times p$ matrix, there exist p such groups, which
overlap. If several nodes of a GGM are perturbed, then this will correspond to the union of the
corresponding row-column groups in $\Theta^1 - \Theta^2$. Therefore, in order to detect node perturbations in
a GGM (Figure 1), we must construct a regularizer that can promote estimates whose support is the
union of row-column groups. For this task, we propose the row-column overlap norm as a penalty.
Definition 3.1. The row-column overlap norm (RCON) induced by a matrix norm f is defined as
\[
\Omega_f(A) = \min_{V :\, A = V + V^T} f(V). \tag{5}
\]
RCON satisfies the following properties, which are easy to check: (1) $\Omega_f$ is indeed a norm; consequently, it is convex. (2) When f is symmetric in its argument, i.e., $f(V) = f(V^T)$, then
$\Omega_f(A) = f(A)/2$.
In this paper, we are interested in the particular class of RCON penalty where f is given by
\[
f(V) = \sum_{j=1}^p \|V_j\|_q, \tag{6}
\]
where $1 \le q \le \infty$. The norm in (6) is known as the $\ell_1/\ell_q$ norm, since it can be interpreted as the
$\ell_1$ norm of the $\ell_q$ norms of the columns of a matrix. With a little abuse of notation, we will let $\Omega_q$
denote $\Omega_f$ with f an $\ell_1/\ell_q$ norm of the form (6). We note that $\Omega_q$ is closely related to the overlap
group lasso penalty [17, 16], and in fact can be derived from it (for the case of q = 2). However,
our definition naturally and elegantly handles the grouping structure induced by the overlap of rows
and columns, and can accommodate any $\ell_q$ norm with $q \ge 1$, and more generally any norm f. As
discussed in [17], when applied to $\Theta^1 - \Theta^2$, the penalty $\Omega_q$ (with q = 2) will encourage the support
of the matrix $\hat\Theta^1 - \hat\Theta^2$ to be the union of a set of rows and columns.
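Because the inner minimization in (5) is itself a convex problem, $\Omega_q(A)$ can be evaluated numerically with a generic solver. The sketch below does this with cvxpy; the use of cvxpy and the test matrix are our own choices for illustration. As a sanity check, it also verifies property (2) for the symmetric choice $f = \|\cdot\|_1$, for which $\Omega_1(A) = \|A\|_1/2$.

```python
import cvxpy as cp
import numpy as np

def rcon(A, q=2):
    """Omega_q(A) = min over {V : A = V + V^T} of sum_j ||V_j||_q."""
    p = A.shape[0]
    V = cp.Variable((p, p))
    objective = cp.sum(cp.norm(V, q, axis=0))   # l1/lq norm of the columns of V
    problem = cp.Problem(cp.Minimize(objective), [V + V.T == A])
    problem.solve()
    return problem.value

A = np.random.default_rng(1).standard_normal((4, 4))
A = A + A.T                                      # RCON is applied to symmetric differences
print("Omega_2(A) =", rcon(A, q=2))
print("Omega_1(A) =", rcon(A, q=1), "vs ||A||_1 / 2 =", np.abs(A).sum() / 2)
```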
Now, consider the task of jointly estimating two precision matrices by solving
\[
\underset{\Theta^1, \Theta^2 \in S_{++}^p}{\text{maximize}} \; \left\{ L(\Theta^1, \Theta^2) - \lambda_1 \|\Theta^1\|_1 - \lambda_1 \|\Theta^2\|_1 - \lambda_2 \Omega_q(\Theta^1 - \Theta^2) \right\}. \tag{7}
\]
We refer to the convex optimization problem (7) as the perturbed-node joint graphical lasso (PNJGL) formulation. In (7), $\lambda_1$ and $\lambda_2$ are nonnegative tuning parameters, and $q \ge 1$. Note that
$f(V) = \|V\|_1$ satisfies property 2 of the RCON penalty. Thus we have the following observation.
Remark 3.1. The FGL formulation (2) is a special case of the PNJGL formulation (7) with q = 1.
Let $\hat\Theta^1, \hat\Theta^2$ be the optimal solution to (7). Note that the FGL formulation is an edge-based approach
that promotes many entries (or edges) in $\hat\Theta^1 - \hat\Theta^2$ to be set to zero. However, setting q = 2 or $q = \infty$
in (7) gives us a node-based approach, where the support of $\hat\Theta^1 - \hat\Theta^2$ is encouraged to be a union
of a few rows and the corresponding columns [17, 16]. Thus the nodes that have been perturbed can
be clearly detected using PNJGL with q = 2 or $q = \infty$. An example of the sparsity structure detected by
PNJGL with q = 2 is shown in the left-hand panel of Figure 2. We note that the above formulation
can be easily extended to the estimation of K > 2 GGMs by including $K(K-1)/2$ RCON penalty
terms in (7), one for each pair of models. However, we restrict ourselves to the case of K = 2 in this
paper.
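For small p, (7) can be posed directly in a convex modeling language (the next section notes that cvx [18] can solve it); a cvxpy analogue for q = 2 might look as follows, where the RCON term is handled through the auxiliary variable V from (5). The function name and its arguments are placeholders, and this is a sketch rather than the authors' code.

```python
import cvxpy as cp

def pnjgl(S1, S2, n1, n2, lam1, lam2):
    """Direct convex formulation of (7) with q = 2 (small p only)."""
    p = S1.shape[0]
    T1 = cp.Variable((p, p), PSD=True)
    T2 = cp.Variable((p, p), PSD=True)
    V = cp.Variable((p, p))                      # auxiliary variable from (5)
    loglik = (n1 * (cp.log_det(T1) - cp.trace(S1 @ T1))
              + n2 * (cp.log_det(T2) - cp.trace(S2 @ T2)))
    rcon = cp.sum(cp.norm(V, 2, axis=0))         # Omega_2(T1 - T2) via T1 - T2 = V + V^T
    objective = (loglik - lam1 * cp.sum(cp.abs(T1))
                 - lam1 * cp.sum(cp.abs(T2)) - lam2 * rcon)
    problem = cp.Problem(cp.Maximize(objective), [T1 - T2 == V + V.T])
    problem.solve()
    return T1.value, T2.value
```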
4 An ADMM algorithm for the PNJGL formulation
The PNJGL optimization problem (7) is convex, and so can be directly solved in the modeling
environment cvx [18], which calls conic interior-point solvers such as SeDuMi or SDPT3. However, such a general approach does not fully exploit the structure of the problem and will not scale
well to large-scale instances. Other algorithms proposed for overlapping group lasso penalties
[19, 20, 21] do not apply to our setting since the PNJGL formulation has a combination of Gaussian
log-likelihood loss (instead of squared error loss) and the RCON penalty along with a positive-definite constraint. We also note that other first-order methods are not easily applied to solve the
PNJGL formulation because the subgradient of the RCON is not easy to compute and in addition
the proximal operator to RCON is non-trivial to compute.
In this section we present a fast and scalable alternating directions method of multipliers (ADMM)
algorithm [22] to solve the problem (7). We first reformulate (7) by introducing new variables, so
as to decouple some of the terms in the objective function that are difficult to optimize jointly. This
will result in a simple algorithm with closed-form updates. The reformulation is as follows:
\[
\underset{\Theta^1, \Theta^2 \in S_{++}^p,\; Z^1, Z^2, V, W}{\text{minimize}} \; \left\{ -L(\Theta^1, \Theta^2) + \lambda_1 \|Z^1\|_1 + \lambda_1 \|Z^2\|_1 + \lambda_2 \sum_{j=1}^p \|V_j\|_q \right\}
\]
\[
\text{subject to} \quad \Theta^1 - \Theta^2 = V + W, \quad V = W^T, \quad \Theta^1 = Z^1, \quad \Theta^2 = Z^2. \tag{8}
\]
An ADMM algorithm can now be obtained in a standard fashion from the augmented Lagrangian
to (8). We defer the details to a longer version of this paper. The complete algorithm for (8) is given
in Algorithm 1, in which the operator Expand is given by
\[
\mathrm{Expand}(A, \rho, n_k) = \underset{\Theta \in S_{++}^p}{\arg\min} \left\{ -n_k \log\det(\Theta) + \rho \|\Theta - A\|_F^2 \right\} = \frac{1}{2}\, U \left( D + \sqrt{D^2 + \frac{2 n_k}{\rho} I} \right) U^T,
\]
where $U D U^T$ is the eigenvalue decomposition of A and, as mentioned earlier, $n_k$ is the number of
observations in the kth class. The operator $T_q$ is given by
\[
T_q(A, \lambda) = \underset{X}{\arg\min} \left\{ \frac{1}{2}\|X - A\|_F^2 + \lambda \sum_{j=1}^p \|X_j\|_q \right\},
\]
and is also known as the proximal operator corresponding to the $\ell_1/\ell_q$ norm. For $q = 1, 2, \infty$, $T_q$
takes a simple form, which we omit here due to space constraints. A description of these operators
can also be found in Section 5 of [25].
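For concreteness, minimal NumPy versions of the two operators might look as follows. Expand follows the eigenvalue formula above; T_q is written for q = 2, where it reduces to column-wise group soft-thresholding. This is a sketch under those conventions, not the authors' code.

```python
import numpy as np

def expand(A, rho, n_k):
    """Expand(A, rho, n_k) via the eigenvalue formula above."""
    d, U = np.linalg.eigh((A + A.T) / 2)          # symmetrize before eigh
    x = 0.5 * (d + np.sqrt(d**2 + 2.0 * n_k / rho))
    return (U * x) @ U.T                          # U diag(x) U^T

def t_q2(A, lam):
    """T_2(A, lam): proximal operator of lam * sum_j ||X_j||_2,
    i.e. group soft-thresholding applied to each column of A."""
    norms = np.linalg.norm(A, axis=0)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return A * scale
```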
Algorithm 1 can be interpreted as an approximate dual gradient ascent method. The approximation
is due to the fact that the gradient of the dual to the augmented Lagrangian in each iteration is
computed inexactly, through a coordinate descent cycling through the primal variables.
Typically ADMM algorithms iterate over only two groups of primal variables. For such algorithms,
the convergence properties are well-known (see e.g. [22]). However, in our case we cycle through
more than two such groups. Although investigation of the convergence properties of ADMM algorithms for an arbitrary number of groups is an ongoing research area in the optimization literature
[23, 24] and specific convergence results for our algorithm are not known, we empirically observe
very good convergence behavior. Further study of this issue is a direction for future work.
We initialize the primal variables to the identity matrix, and the dual variables to the matrix of zeros.
We set $\mu = 5$ and $t_{\max} = 1000$. In our implementation, the stopping criterion is that the difference
between consecutive iterates becomes smaller than a tolerance $\epsilon$. The ADMM algorithm is orders
of magnitude faster than an interior point method and is also comparable in accuracy. Note that the
per-iteration complexity of the ADMM algorithm is $O(p^3)$ (the complexity of computing an SVD). On
the other hand, the complexity of an interior point method is $O(p^6)$. When p = 30, the interior
point method (using cvx, which calls SeDuMi) takes 7 minutes to run while ADMM takes only
10 seconds. When p = 50, the times are 3.5 hours and 2 minutes, respectively. Also, we observe
that the average error between the cvx and ADMM solutions, when averaged over many random
generations of the data, is of order $O(10^{-4})$.
Algorithm 1: ADMM algorithm for the PNJGL optimization problem (7)
input: ρ > 0, μ > 1, t_max > 0, ε > 0;
for t = 1 : t_max do
    ρ ← μρ;
    while not converged do
        Θ1 ← Expand( (1/2)(Θ2 + V + W + Z1) − (1/2ρ)(Q1 + n1 S1 + F), ρ, n1 );
        Θ2 ← Expand( (1/2)(Θ1 − (V + W) + Z2) − (1/2ρ)(Q2 + n2 S2 − F), ρ, n2 );
        Zi ← T1( Θi + Qi/ρ, λ1/ρ ) for i = 1, 2;
        V ← Tq( (1/2)(W^T − W + (Θ1 − Θ2)) + (1/2ρ)(F − G), λ2/(2ρ) );
        W ← (1/2)(V^T − V + (Θ1 − Θ2)) + (1/2ρ)(F + G^T);
        F ← F + ρ(Θ1 − Θ2 − (V + W));
        G ← G + ρ(V − W^T);
        Qi ← Qi + ρ(Θi − Zi) for i = 1, 2
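Putting the pieces together, a stripped-down Python rendering of Algorithm 1 might look as follows, reusing expand and t_q2 from the sketch above. As a simplification, it runs a fixed number of inner sweeps at a fixed ρ and omits the outer ρ ← μρ schedule and the ε-based stopping rule; T_1 is ordinary elementwise soft-thresholding.

```python
import numpy as np

def pnjgl_admm(S1, S2, n1, n2, lam1, lam2, rho=1.0, n_sweeps=500):
    """Skeleton of Algorithm 1 with q = 2, using expand() and t_q2() above."""
    p = S1.shape[0]
    T1, T2 = np.eye(p), np.eye(p)
    Z1, Z2 = np.eye(p), np.eye(p)
    V = W = F = G = Q1 = Q2 = np.zeros((p, p))
    soft = lambda A, t: np.sign(A) * np.maximum(np.abs(A) - t, 0.0)  # T_1
    for _ in range(n_sweeps):
        T1 = expand(0.5 * (T2 + V + W + Z1) - (Q1 + n1 * S1 + F) / (2 * rho), rho, n1)
        T2 = expand(0.5 * (T1 - (V + W) + Z2) - (Q2 + n2 * S2 - F) / (2 * rho), rho, n2)
        Z1 = soft(T1 + Q1 / rho, lam1 / rho)
        Z2 = soft(T2 + Q2 / rho, lam1 / rho)
        V = t_q2(0.5 * (W.T - W + T1 - T2) + (F - G) / (2 * rho), lam2 / (2 * rho))
        W = 0.5 * (V.T - V + T1 - T2) + (F + G.T) / (2 * rho)
        F = F + rho * (T1 - T2 - (V + W))
        G = G + rho * (V - W.T)
        Q1 = Q1 + rho * (T1 - Z1)
        Q2 = Q2 + rho * (T2 - Z2)
    return T1, T2, V
```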
5 Experiments
We describe experiments and report results on both synthetically generated data and real data.
5.1 Synthetic experiments
Synthetic data generation. We generated two networks as follows. The networks share individual
edges as well as hub nodes, or nodes that are highly connected to many other nodes. There are also
perturbed nodes that differ between the networks. We first create a $p \times p$ symmetric matrix A, with
diagonal elements equal to one. For i < j, we set
\[
A_{ij} \sim_{\text{i.i.d.}} \begin{cases} 0 & \text{with probability } 0.98 \\ \mathrm{Unif}([-0.6, -0.3] \cup [0.3, 0.6]) & \text{otherwise,} \end{cases}
\]
and then we set $A_{ji}$ to equal $A_{ij}$. Next, we randomly selected seven hub nodes, and set the elements
of the corresponding rows and columns to be i.i.d. from a $\mathrm{Unif}([-0.6, -0.3] \cup [0.3, 0.6])$ distribution.
These steps resulted in a background pattern of structure common to both networks. Next, we copied
A into two matrices, $A^1$ and $A^2$. We randomly selected m perturbed nodes that differ between $A^1$
and $A^2$, and set the elements of the corresponding row and column of either $A^1$ or $A^2$ (chosen at
random) to be i.i.d. draws from a $\mathrm{Unif}([-1.0, -0.5] \cup [0.5, 1.0])$ distribution. Finally, we computed
$c = \min(\lambda_{\min}(A^1), \lambda_{\min}(A^2))$, the smallest eigenvalue of $A^1$ and $A^2$. We then set $(\Sigma^1)^{-1}$ equal
to $A^1 + (0.1 - c)I$ and set $(\Sigma^2)^{-1}$ equal to $A^2 + (0.1 - c)I$. This last step is performed in order to
ensure positive definiteness. We generated n independent observations each from a $N(0, \Sigma^1)$ and a
$N(0, \Sigma^2)$ distribution, and used these to compute the empirical covariance matrices $S^1$ and $S^2$. We
compared the performances of the graphical lasso, FGL, and PNJGL with q = 2, for p = 100, m = 2,
and $n \in \{10, 25, 50, 200\}$. A sketch of this generator appears below.
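A compact NumPy rendering of the generator might look as follows; the exact treatment of the diagonal entries of perturbed nodes is not specified above, so the sketch's choices there are assumptions.

```python
import numpy as np

def make_pair(p=100, m=2, n_hubs=7, seed=0):
    """Generate ((Sigma1)^-1, (Sigma2)^-1) as in Section 5.1 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    draw = lambda lo, hi, size: rng.uniform(lo, hi, size) * rng.choice([-1, 1], size)
    A = np.eye(p)
    mask = np.triu(rng.random((p, p)) < 0.02, k=1)        # sparse background edges
    A[mask] = draw(0.3, 0.6, int(mask.sum()))
    A = np.triu(A) + np.triu(A, 1).T                      # symmetrize, unit diagonal
    for h in rng.choice(p, n_hubs, replace=False):        # shared hub nodes
        A[h, :] = A[:, h] = draw(0.3, 0.6, p)
        A[h, h] = 1.0
    A1, A2 = A.copy(), A.copy()
    for node in rng.choice(p, m, replace=False):          # perturbed nodes
        target = A1 if rng.random() < 0.5 else A2
        target[node, :] = target[:, node] = draw(0.5, 1.0, p)
    c = min(np.linalg.eigvalsh(A1).min(), np.linalg.eigvalsh(A2).min())
    return A1 + (0.1 - c) * np.eye(p), A2 + (0.1 - c) * np.eye(p)
```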
Results. Results (averaged over 100 iterations) are shown in Figure 3. Increasing n yields more
accurate results for PNJGL with q = 2, FGL, and graphical lasso. Furthermore, PNJGL with q = 2
identifies non-zero edges and differing edges much more accurately than does FGL, which is in turn
more accurate than graphical lasso. PNJGL also leads to the most accurate estimates of $\Sigma^1$ and $\Sigma^2$.
The extent to which PNJGL with q = 2 outperforms others is more apparent when n is small.
5.2 Inferring biological networks
We applied the PNJGL method to a recently-published cancer gene expression data set [26], with
mRNA expression measurements for 11,861 genes in 220 patients with glioblastoma multiforme
(GBM), a brain cancer. Each patient has one of four distinct clinical subtypes: Proneural, Neural,
Classical, and Mesenchymal. We selected two subtypes, Proneural (53 patients) and Mesenchymal
(56 patients), for our analysis. In this experiment, we aim to reconstruct the gene regulatory
for (a) n = 10, (b) n = 25, (c) n = 50, (d) n = 200, when p = 100. Within each panel,
each line corresponds to a fixed value of ?2 (for PNJGL with q = 2 and for FGL). Each plot?s
x-axis denotes the number of edges estimated to be non-zero. The y-axes are as follows. Left:
Number of edges correctly estimated to be non-zero. Center: Number of edges correctly estimated
to differ across networks, divided by the number of edges estimated to differ across networks. Right:
?
1
1 2 1/2
The Frobenius norm of the error in the estimated precision matrices, i.e. ( i?=j (?ij
? ??ij
) )
+
?
2
2 2 1/2
( i?=j (?ij
? ??ij
) ) .
(56 patients) ? for our analysis. In this experiment, we aim to reconstruct the gene regulatory
networks of the two subtypes, as well as to identify genes whose interactions with other genes vary
significantly between the subtypes. Such genes are likely to have many somatic (cancer-specific)
mutations. Understanding the molecular basis of these subtypes will lead to better understanding of
brain cancer, and eventually, improved patient treatment. We selected the 250 genes with the highest
within-subtype variance, as well as 10 genes known to be frequently mutated across the four GBM
subtypes [26]: TP53, PTEN, NF1, EGFR, IDH1, PIK3R1, RB1, ERBB2, PIK3CA, PDGFRA. Two
of these genes (EGFR, PDGFRA) were in the initial list of 250 genes selected based on the withinsubtype variance. This led to a total of 258 genes. We then applied PNJGL with q = 2 and FGL
to the resulting 53 ? 258 and 56 ? 258 gene expression datasets, after standardizing each gene to
have variance one. Tuning parameters were selected so that each approach results in a per-network
estimate of approximately 6,000 non-zero edges, as well as approximately 4,000 edges that differ
across the two network estimates. However, the results that follow persisted across a wide range of
tuning parameter values.
Figure 4: PNJGL with q = 2 and FGL were performed on the brain cancer data set corresponding
to 258 genes in patients with Proneural and Mesenchymal subtypes. (a)-(b): $NP_j$ is plotted for each
gene, based on (a) the FGL estimates and (b) the PNJGL estimates. (c)-(d): A heatmap of $\hat\Theta^1 - \hat\Theta^2$
is shown for (c) FGL and (d) PNJGL; zero values are in white, and non-zero values are in black.
We quantify the extent of node perturbation (NP) in the network estimates as follows: $NP_j = \sum_i |V_{ij}|$; for FGL we get V from the PNJGL formulation as $\frac{1}{2}(\hat\Theta^1 - \hat\Theta^2)$. If $NP_j = 0$ (using a zero
threshold of $10^{-6}$), then the jth gene has the same edge weights in the two conditions. In Figure 4(a)-(b), we plotted the resulting values for each of the 258 genes in FGL and PNJGL. Although the
network estimates resulting from PNJGL and FGL have approximately the same number of edges
that differ across cancer subtypes, PNJGL results in estimates in which only 37 genes appear to have
node perturbation. FGL results in estimates in which all 258 genes appear to have node perturbation.
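The $NP_j$ score is a one-liner in practice; a short transcription (with the $10^{-6}$ threshold above as the default tolerance):

```python
import numpy as np

def node_perturbation_scores(Theta1_hat, Theta2_hat, tol=1e-6):
    """NP_j = sum_i |V_ij| with V = (Theta1_hat - Theta2_hat) / 2."""
    V = 0.5 * (Theta1_hat - Theta2_hat)
    np_scores = np.abs(V).sum(axis=0)
    perturbed = np.where(np_scores > tol)[0]   # genes with differing connectivity
    return np_scores, perturbed
```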
In Figure 4(c)-(d), the non-zero elements of $\hat\Theta^1 - \hat\Theta^2$ for FGL and for PNJGL are displayed. Clearly,
the pattern of network differences resulting from PNJGL is far more structured. The genes known
to be frequently mutated across GBM subtypes are somewhat enriched out of those that appear to be
perturbed according to the PNJGL estimates (3 out of 10 mutated genes were detected by PNJGL; 37
out of 258 total genes were detected by PNJGL; hypergeometric p-value = 0.1594). In contrast, FGL
detects every gene as having node perturbation (Figure 4(a)). The gene with the highest N Pj value
(according to both FGL and PNJGL with q = 2) is CXCL13, a small cytokine that belongs to the
CXC chemokine family. Together with its receptor CXCR5, it controls the organization of B-cells
within follicles of lymphoid tissues. This gene was not identified as a frequently mutated gene in
GBM [26]. However, there is recent evidence that CXCL13 plays a critical role in driving cancerous
pathways in breast, prostate, and ovarian tissue [27, 28]. Our results suggest the possibility of a
previously unknown role of CXCL13 in brain cancer.
6 Discussion and future work
We have proposed the perturbed-node joint graphical lasso, a new approach for jointly learning
Gaussian graphical models under the assumption that network differences result from node perturbations. We impose this structure using a novel RCON penalty, which encourages the differences
between the estimated networks to be the union of just a few rows and columns. We solve the resulting convex optimization problem using ADMM, which is more efficient and scalable than standard
interior point methods. Our proposed approach leads to far better performance on synthetic data
than two alternative approaches: learning Gaussian graphical models assuming edge perturbation
[13], or simply learning each model separately. Future work will involve other forms of structured
sparsity beyond simply node perturbation. For instance, if certain subnetworks are known a priori
to be related to the conditions under study, then the RCON penalty can be modified in order to encourage some subnetworks to be perturbed across the conditions. In addition, the ADMM algorithm
described in this paper requires computation of the eigen decomposition of a p ? p matrix at each
iteration; we plan to develop computational improvements that mirror recent results on related problems in order to reduce the computations involved in solving the FGL optimization problem [6, 13].
Acknowledgments D.W. was supported by NIH Grant DP5OD009145, M.F. was supported in part
by NSF grant ECCS-0847077.
References
[1] K.V. Mardia, J. Kent, and J.M. Bibby. Multivariate Analysis. Academic Press, 1979.
[2] S.L. Lauritzen. Graphical Models. Oxford Science Publications, 1996.
[3] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(10):19–35, 2007.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9:432–441, 2007.
[5] O. Banerjee, L. E. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. JMLR, 9:485–516, 2008.
[6] D.M. Witten, J.H. Friedman, and N. Simon. New insights and faster computations for the graphical lasso. Journal of Computational and Graphical Statistics, 20(4):892–900, 2011.
[7] K. Scheinberg, S. Ma, and D. Goldfarb. Sparse inverse covariance selection via alternating linearization methods. Advances in Neural Information Processing Systems, 2010.
[8] P. Ravikumar, M.J. Wainwright, G. Raskutti, and B. Yu. Model selection in Gaussian graphical models: high-dimensional consistency of l1-regularized MLE. Advances in NIPS, 2008.
[9] C.J. Hsieh, M. Sustik, I. Dhillon, and P. Ravikumar. Sparse inverse covariance estimation using quadratic approximation. Advances in Neural Information Processing Systems, 2011.
[10] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267–288, 1996.
[11] A. d'Aspremont, O. Banerjee, and L. El Ghaoui. First-order methods for sparse covariance selection. SIAM Journal on Matrix Analysis and Applications, 30(1):56–66, 2008.
[12] J. Guo, E. Levina, G. Michailidis, and J. Zhu. Joint estimation of multiple graphical models. Biometrika, 98(1):1–15, 2011.
[13] P. Danaher, P. Wang, and D. Witten. The joint graphical lasso for inverse covariance estimation across multiple classes, 2012. http://arxiv.org/abs/1111.0324.
[14] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society, Series B, 67:91–108, 2005.
[15] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68:49–67, 2007.
[16] L. Jacob, G. Obozinski, and J.P. Vert. Group lasso with overlap and graph lasso. Proceedings of the 26th International Conference on Machine Learning, 2009.
[17] G. Obozinski, L. Jacob, and J.P. Vert. Group lasso with overlaps: the latent group lasso approach. 2011. http://arxiv.org/abs/1110.0413.
[18] M. Grant and S. Boyd. cvx version 1.21. http://cvxr.com/cvx, October 2010.
[19] A. Argyriou, C.A. Micchelli, and M. Pontil. Efficient first order methods for linear composite regularizers. 2011. http://arxiv.org/pdf/1104.1436.
[20] X. Chen, Q. Lin, S. Kim, J.G. Carbonell, and E.P. Xing. Smoothing proximal gradient method for general structured sparse learning. Proceedings of the conference on Uncertainty in Artificial Intelligence, 2011.
[21] S. Mosci, S. Villa, A. Verri, and L. Rosasco. A primal-dual algorithm for group sparse regularization with overlapping groups. Neural Information Processing Systems, pages 2604–2612, 2010.
[22] S.P. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in ML, 3(1):1–122, 2010.
[23] M. Hong and Z. Luo. On the linear convergence of the alternating direction method of multipliers. 2012. Available at arxiv.org/abs/1208.3922.
[24] B. He, M. Tao, and X. Yuan. Alternating direction method with Gaussian back substitution for separable convex programming. SIAM Journal of Optimization, pages 313–340, 2012.
[25] J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, pages 2899–2934, 2009.
[26] Verhaak et al. Integrated genomic analysis identifies clinically relevant subtypes of glioblastoma characterized by abnormalities in PDGFRA, IDH1, EGFR, and NF1. Cancer Cell, 17(1):98–110, 2010.
[27] Grosso et al. Chemokine CXCL13 is overexpressed in the tumour tissue and in the peripheral blood of breast cancer patients. British Journal Cancer, 99(6):930–938, 2008.
[28] El-Haibi et al. CXCL13-CXCR5 interactions support prostate cancer cell migration and invasion in a PI3K p110-, SRC- and FAK-dependent fashion. The Journal of Immunology, 15(19):5968–73, 2009.
| 4499 |@word version:2 norm:17 seems:1 unif:3 d2:1 simulation:1 covariance:9 decomposition:2 kent:1 q1:1 hsieh:1 jacob:2 accommodate:1 initial:1 substitution:1 series:3 selecting:1 egfr:3 outperforms:2 aberrant:1 z2:4 com:1 luo:1 chu:1 must:2 stemming:1 plot:1 update:1 intelligence:1 selected:6 ith:1 detecting:3 iterates:1 node:42 org:4 along:1 differential:1 yuan:3 consists:1 pathway:2 mosci:1 indeed:1 behavior:1 themselves:1 frequently:3 brain:10 detects:1 company:1 little:1 positivedefinite:1 solver:1 increasing:1 becomes:1 estimating:6 underlying:3 notation:1 panel:3 biostatistics:2 argmin:2 interpreted:2 q2:1 differing:2 nf1:2 every:1 biometrika:2 control:1 subtype:1 grant:3 omit:1 appear:3 positive:4 t1:1 engineering:3 ecc:1 modify:1 pdgfra:3 receptor:1 oxford:1 abuse:1 approximately:3 might:1 tmax:3 black:1 shaded:2 challenging:1 idh1:2 range:1 averaged:2 fazel:1 acknowledgment:1 union:7 glioblastoma:2 definite:2 aji:1 pontil:1 area:1 empirical:4 significantly:2 vert:2 boyd:2 composite:1 word:1 specificity:1 suggest:1 get:1 cannot:1 interior:5 selection:7 operator:5 applying:1 optimize:1 lagrangian:2 center:1 maximizing:1 mrna:1 convex:11 splitting:1 insight:1 handle:1 coordinate:1 suppose:3 play:1 programming:1 fak:1 element:11 trend:1 role:2 yoon:1 electrical:1 solved:1 wang:1 cycle:1 connected:1 highest:2 knight:1 src:1 disease:1 substantial:1 principled:1 environment:1 mentioned:1 complexity:3 solving:4 serve:1 upon:1 basis:3 easily:2 joint:6 stock:2 represented:1 regularizer:1 univ:4 distinct:4 fast:1 describe:1 detected:5 artificial:1 saunders:1 quite:1 whose:4 widely:1 solve:7 larger:1 apparent:1 otherwise:1 reconstruct:1 statistic:1 jointly:5 online:1 advantage:1 eigenvalue:2 propose:3 maryam:1 interaction:2 relevant:1 rb1:1 description:1 frobenius:1 convergence:5 extending:1 develop:1 ij:10 lauritzen:1 solves:2 c:1 involves:1 indicate:2 quantify:1 differ:13 direction:7 closely:1 correct:1 mesenchymal:3 human:1 adjacency:4 investigation:1 biological:1 subtypes:10 sufficiently:1 considered:2 normal:1 driving:1 pi3k:1 vary:1 commonality:1 consecutive:1 a2:6 smallest:1 purpose:1 gbm:4 estimation:10 grouped:1 create:1 tool:1 clearly:2 genomic:1 gaussian:10 aim:1 modified:1 rather:1 shrinkage:1 publication:1 derived:1 ax:1 cancerous:1 improvement:1 likelihood:5 check:1 contrast:1 kim:1 detect:5 inference:1 dependent:1 stopping:1 el:3 unlikely:1 typically:1 integrated:1 expand:4 interested:3 tao:1 overall:1 among:5 dual:4 issue:1 priori:1 heatmap:1 plan:1 special:1 initialize:1 smoothing:1 field:1 construct:1 equal:4 having:1 washington:5 encouraged:2 biology:1 identical:2 yu:1 promote:1 future:3 report:1 others:1 np:1 prostate:2 few:4 randomly:2 simultaneously:1 resulted:1 individual:3 ourselves:1 karthik:1 n1:5 ab:3 tq:4 friedman:2 organization:1 interest:1 highly:1 possibility:1 primal:4 regularizers:1 accurate:4 edge:24 encourage:3 sedumi:2 desired:1 plotted:2 instance:3 column:21 modeling:2 kij:2 earlier:1 bibby:1 introducing:1 entry:2 successful:1 motivating:1 perturbed:14 proximal:3 synthetic:6 rosset:1 migration:1 disrupted:1 fundamental:1 siam:2 international:1 immunology:1 lee:1 probabilistic:1 off:1 michael:1 together:1 fused:4 connectivity:2 squared:1 rosasco:1 chung:1 toy:2 standardizing:1 invasion:1 performed:3 closed:1 doing:1 red:1 xing:1 lung:4 defer:1 simon:1 mutation:2 minimize:1 ggm:4 accuracy:1 variance:3 characteristic:1 correspond:2 yield:3 identify:1 mutated:4 accurately:1 published:1 tissue:4 converged:1 definition:2 involved:1 naturally:1 associated:1 
treatment:1 ut:1 organized:1 back:1 appears:1 follow:1 improved:1 entrywise:1 formulation:15 verri:1 shrink:1 furthermore:1 just:1 p6:1 hand:2 su:1 suinlee:1 overlapping:3 glance:1 banerjee:2 scientific:1 fgl:22 multiplier:5 true:1 regularization:1 alternating:7 symmetric:2 nonzero:1 goldfarb:1 dhillon:1 illustrated:1 white:1 encourages:1 criterion:1 hong:1 pdf:1 complete:1 duchi:1 l1:1 pten:1 novel:1 recently:1 parikh:1 nih:1 common:1 raskutti:1 witten:3 empirically:1 exponentially:1 discussed:1 he:1 measurement:2 significant:1 refer:4 cytokine:1 smoothness:1 tuning:5 consistency:1 language:1 mfazel:1 access:2 han:1 acute:1 similarity:1 longer:1 gt:1 multivariate:2 recent:4 belongs:1 driven:1 certain:5 binary:1 vt:3 somewhat:1 impose:1 maximize:5 multiple:6 full:1 stem:1 danaher:1 faster:2 academic:1 levina:1 clinical:1 characterized:1 lin:3 divided:1 molecular:1 ravikumar:2 promotes:1 a1:6 mle:1 qi:2 scalable:2 regression:2 breast:2 vision:1 patient:13 arxiv:4 iteration:4 cell:4 proposal:4 background:2 addition:2 separately:2 singular:1 rest:1 ascent:1 subject:2 induced:2 call:2 presence:2 synthetically:1 abnormality:1 identically:2 easy:2 variety:1 independence:1 xj:1 iterate:1 zi:2 hastie:1 lasso:32 restrict:1 identified:1 follicle:1 reduce:1 michailidis:1 det:3 expression:6 sdpt3:1 etiology:1 x1k:1 penalty:22 remark:1 generally:1 involve:1 amount:2 http:4 exist:1 nsf:1 neuroscience:1 estimated:6 per:2 correctly:2 ovarian:1 tibshirani:3 group:26 four:2 reformulation:1 threshold:1 blood:1 pj:4 backward:1 uw:3 graph:1 subgradient:1 run:1 inverse:4 uncertainty:1 family:1 cvx:5 p3:1 draw:1 comparable:1 copied:1 quadratic:1 paramount:1 nonnegative:2 activity:1 strength:1 constraint:4 x2:2 aspect:1 argument:1 min:4 separable:1 structured:8 according:2 p110:1 peripheral:1 combination:1 clinically:1 across:15 smaller:1 increasingly:1 s1:2 ghaoui:2 previously:1 scheinberg:1 daniela:1 turn:1 mechanism:1 eventually:1 singer:1 letting:1 serf:1 subnetworks:2 sustik:1 available:5 apply:1 observe:2 alternative:2 batch:1 eigen:1 rp:2 denotes:2 assumes:1 ensure:1 graphical:30 exploit:2 especially:1 classical:1 society:3 micchelli:1 objective:1 diagonal:2 villa:1 cycling:1 tumour:1 gradient:3 kth:2 separate:1 unable:1 carbonell:1 seven:1 extent:2 trivial:1 assuming:1 relationship:1 reformulate:1 equivalently:1 difficult:1 unfortunately:1 october:1 trace:2 implementation:1 unknown:1 allowing:1 observation:8 neuron:1 datasets:1 descent:1 displayed:2 extended:1 looking:1 persisted:1 perturbation:24 somatic:2 arbitrary:1 peleato:1 complement:1 pair:4 eckstein:1 z1:4 hypergeometric:1 hour:1 nip:1 beyond:1 pattern:4 sparsity:5 including:1 royal:3 wainwright:1 overlap:11 critical:1 natural:1 regularized:1 zhu:2 conic:1 identifies:2 axis:1 aspremont:2 naive:3 kj:1 literature:2 understanding:2 pik3r1:1 fully:1 loss:2 generation:2 borrows:1 foundation:1 undergone:1 imposes:1 vij:1 share:2 row:18 cancer:24 penalized:2 gl:1 last:1 supported:2 jth:3 aij:2 wide:1 sparse:9 distributed:3 tolerance:1 xn:1 world:1 genome:1 author:1 forward:1 far:2 approximate:1 implicitly:1 gene:39 ml:1 reveals:1 regulatory:9 latent:1 sk:1 why:1 symmetry:2 complex:1 domain:1 vj:2 elegantly:1 s2:2 arise:2 n2:5 cvxr:1 jae:1 x1:1 augmented:2 enriched:1 fashion:2 definiteness:1 precision:3 fails:1 inferring:2 wish:1 mardia:1 jmlr:1 minute:2 british:1 specific:6 hub:2 list:1 evidence:1 grouping:1 effectively:2 importance:1 mirror:1 magnitude:1 mohan:1 linearization:1 illustrates:1 nk:5 chen:1 led:1 simply:5 likely:4 corresponds:2 satisfies:2 
inexactly:1 ma:1 obozinski:2 conditional:1 goal:1 formulated:1 identity:1 consequently:2 shared:1 admm:14 change:2 specifically:2 except:1 wt:3 decouple:1 tumor:1 total:2 experimental:1 svd:1 ggms:3 support:6 guo:1 arises:3 ongoing:1 argyriou:1 |
AN OPTIMIZATION NETWORK FOR MATRIX INVERSION
Ju-Seog Jang, Soo-Young Lee, and Sang-Yung Shin
Korea Advanced Institute of Science and Technology,
P.O. Box 150, Cheongryang, Seoul, Korea
ABSTRACT
Inverse matrix calculation can be considered as an optimization. We have
demonstrated that this problem can be rapidly solved by highly interconnected
simple neuron-like analog processors. A network for matrix inversion based on
the concept of Hopfield's neural network was designed, and implemented with
electronic hardware. With slight modifications, the network is readily applicable to
solving a linear simultaneous equation efficiently. Notable features of this circuit
are potential speed due to parallel processing, and robustness against variations of
device parameters.
INTRODUCTION
Highly interconnected simple analog processors which mimic a biological
neural network are known to excel at certain collective computational tasks. For
example, Hopfield and Tank designed a network to solve the traveling salesman
problem, which is of the NP-complete class,1 and also designed an A/D converter
of novel architecture2 based on Hopfield's neural network model.3,4 The network could provide good or optimum solutions during an elapsed time of only a
few characteristic time constants of the circuit.
The essence of collective computation is the dissipative dynamics in which initial voltage configurations of neuron-like analog processors evolve simultaneously
and rapidly to steady states that may be interpreted as optimal solutions. Hopfield
has constructed the computational energy E (Liapunov function), and has shown
that the energy function E of his network decreases in time when coupling coefficients are symmetric. At the steady state E becomes one of local minima.
In this paper we consider the matrix inversion as an optimization problem,
and apply the concept of the Hopfield neural network model to this problem.
CONSTRUCTION OF THE ENERGY FUNCTIONS
Consider a matrix equation AV = I, where A is an input $n \times n$ matrix, V is
the unknown inverse matrix, and I is the identity matrix. Following Hopfield we
define n energy functions $E_k$, $k = 1, 2, \ldots, n$,
\[
E_1 = \tfrac{1}{2}\Big[\Big(\sum_{j=1}^n A_{1j} V_{j1} - 1\Big)^2 + \Big(\sum_{j=1}^n A_{2j} V_{j1}\Big)^2 + \cdots + \Big(\sum_{j=1}^n A_{nj} V_{j1}\Big)^2\Big]
\]
\[
E_2 = \tfrac{1}{2}\Big[\Big(\sum_{j=1}^n A_{1j} V_{j2}\Big)^2 + \Big(\sum_{j=1}^n A_{2j} V_{j2} - 1\Big)^2 + \cdots + \Big(\sum_{j=1}^n A_{nj} V_{j2}\Big)^2\Big]
\]
\[
\vdots
\]
\[
E_n = \tfrac{1}{2}\Big[\Big(\sum_{j=1}^n A_{1j} V_{jn}\Big)^2 + \Big(\sum_{j=1}^n A_{2j} V_{jn}\Big)^2 + \cdots + \Big(\sum_{j=1}^n A_{nj} V_{jn} - 1\Big)^2\Big] \tag{1}
\]
where $A_{ij}$ and $V_{ij}$ are the elements of the ith row and jth column of the matrices A and
V, respectively. When A is a nonsingular matrix, the minimum value (= zero) of
each energy function is unique and is located at a point in the corresponding
hyperspace whose coordinates are $\{V_{1k}, V_{2k}, \ldots, V_{nk}\}$, $k = 1, 2, \ldots, n$. At
this minimum value of each energy function the values of $V_{11}, V_{12}, \ldots, V_{nn}$
become the elements of the inverse matrix $A^{-1}$. When A is a singular matrix the
minimum value (in general, not zero) of each energy function is not unique and is
located on a contour line of the minimum value. Thus, if we construct a model
network in which initial voltage configurations of simple analog processors, called
neurons, converge simultaneously and rapidly to the minimum energy point, we can
say the network has found the optimum solution of the matrix inversion problem.
The optimum solution means that when A is a nonsingular matrix the result is the
inverse matrix that we want to know, and when A is a singular matrix the result
is a solution that is optimal in a least-squares sense of Eq. (1).
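The energies (1) are just column-wise least-squares residuals of AV = I, which the following sketch makes explicit; the 3 × 3 test matrix anticipates the example of Fig. 3.

```python
import numpy as np

def energies(A, V):
    """E_k from Eq. (1): one least-squares energy per column of AV - I."""
    R = A @ V - np.eye(A.shape[0])      # residual of AV = I
    return 0.5 * (R**2).sum(axis=0)     # E_k = (1/2) * || (AV - I)[:, k] ||^2

A = np.array([[1., 2., 1.], [-1., 1., 1.], [1., 0., -1.]])
print(energies(A, np.linalg.inv(A)))    # ~0 at the true inverse
```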
DESIGN OF THE NETWORK AND THE HOPFIELD MODEL
Designing the network for matrix inversion, we use the Hopfield model
without inherent loss terms, that is,
\[
\frac{du_{ik}}{dt} = -\frac{\partial}{\partial V_{ik}} E_k(V_{1k}, V_{2k}, \ldots, V_{nk}), \qquad V_{ik} = g_{ik}(u_{ik}), \qquad i, k = 1, 2, \ldots, n \tag{2}
\]
where $u_{ik}$ is the input voltage of the ith neuron in the kth network, $V_{ik}$ is its output,
and the function $g_{ik}$ is the input-output relationship. But the neurons of this
scheme operate in all the regions of $g_{ik}$, differently from Hopfield's nonlinear 2-state neurons of associative memory models.3,4
From Eq. (1) and Eq. (2), we can define coupling coefficients Tij between
ith and jth neurons and rewrite Eq. (2) as
\[
\frac{du_{ik}}{dt} = -\sum_{j=1}^n T_{ij} V_{jk} + A_{ki}, \qquad T_{ij} = \sum_{l=1}^n A_{li} A_{lj} = T_{ji}. \tag{3}
\]
It may be noted that $T_{ij}$ is independent of k and only one set of hardware is
needed for all k. The implemented network is shown in Fig. 1. The same set of
hardware, with bias levels $\sum_{j=1}^n A_{ji} b_j$, can be used to solve a linear simultaneous
equation represented by $Ax = b$ for a given vector b.
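As a sanity check on the dynamics (3), the sketch below integrates them with a simple Euler scheme, taking the linear special case V = u for g (the next section notes that the precise shape of g_ik does not affect the computational capability); the step size and iteration count are assumed values.

```python
import numpy as np

def invert_by_network(A, dt=0.01, steps=5000):
    """Euler integration of Eq. (3) with linear g (V = u), all k networks at once."""
    n = A.shape[0]
    T = A.T @ A                          # T_ij = sum_l A_li A_lj, shared by all k
    V = np.zeros((n, n))                 # V[:, k] holds the kth network's outputs
    for _ in range(steps):
        V += dt * (A.T - T @ V)          # dV_ik/dt = -sum_j T_ij V_jk + A_ki
    return V

A = np.array([[1., 2., 1.], [-1., 1., 1.], [1., 0., -1.]])
print(np.round(invert_by_network(A), 2))   # approaches A^{-1}
```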
[Figure: network schematic with INPUT and OUTPUT terminals.] Fig. 1. Implemented network for matrix inversion with externally
controllable coupling coefficients. Nonlinearity between
the input and the output of neurons is assumed to be
distributed in the adder and the integrator.
The application of the gradient Hopfield model to this problem gives a result
that is similar to the steepest descent method.5 But the nonlinearity between the
input and the output of neurons is introduced. Its effect on the computational
capability will be considered next.
CHARACTERISTICS OF THE NETWORK
For a simple case of 3 × 3 input matrices the network is implemented with
electronic hardware and its dynamic behavior is simulated by integration of
Eq. (3). For nonsingular input matrices, exact realization of the $T_{ij}$ connections and
the biases $A_{ki}$ is an important factor for calculation accuracy, but the initial condition
and other device parameters such as steepness, shape and uniformity of $g_{ik}$ are
not. Even a complex $g_{ik}$ function such as that shown in Fig. 2 cannot affect the computational capability. Convergence time of the output state is determined by the
characteristic time constant of the circuit. An example of experimental results is
shown in Fig. 3. For singular input matrices, the converged output voltage configuration of the network is dependent upon the initial state and the shape of $g_{ik}$.
[Figure: sigmoid input-output curves saturating at $\pm V_m$, drawn for steepness $\lambda_{ik} > 1$, $= 1$, and $< 1$.]
Fig. 2. $g_{ik}$ functions used in computer simulations, where
$\lambda_{ik}$ is the steepness of the sigmoid function $\tanh(\lambda_{ik} u_{ik})$.
[Figure] Fig. 3. An example of experimental results: for the input matrix
$A = \begin{bmatrix} 1 & 2 & 1 \\ -1 & 1 & 1 \\ 1 & 0 & -1 \end{bmatrix}$
(cf. $A^{-1} = \begin{bmatrix} 0.5 & -1 & -0.5 \\ 0 & 1 & 1 \\ 0.5 & -1 & -1.5 \end{bmatrix}$),
the network converges to the output matrix
$V = \begin{bmatrix} 0.50 & -0.98 & -0.49 \\ 0.02 & 0.99 & 1.00 \\ 0.53 & -0.98 & -1.50 \end{bmatrix}$.
COMPLEXITY ANALYSIS
By counting operations we compare the neural net approach with other well-known methods such as triangular decomposition and Gauss-Jordan elimination.6
(1) Triangular decomposition or Gauss-Jordan elimination takes $O(8n^3/3)$
multiplications/divisions and additions for large $n \times n$ matrix inversion, and
$O(2n^3/3)$ multiplications/divisions and additions for solving the linear simultaneous
equation $Ax = b$.
(2) The neural net approach takes the number of operations required to calculate
$T_{ij}$ (nothing but matrix-matrix multiplication), that is, $O(n^3/2)$ multiplications and
additions for both matrix inversion and solving the linear simultaneous equation.
And the time required for output stabilization is about a few times the characteristic time constant of the network. The calculation of coupling coefficients can
be directly executed without multiple iterations by a specially designed optical
matrix-matrix multiplier,7 while the calculation of bias values in solving a linear
simultaneous equation can be done by an optical vector-matrix multiplier.8 Thus,
this approach has a definite advantage in potential calculation speed due to global
interconnection of simple parallel analog processors, though its calculation accuracy may be limited by the nature of analog computation. A large number of
controllable $T_{ij}$ interconnections may be easily realized with optoelectronic devices.9
CONCLUSIONS
We have designed and implemented a matrix inversion network based on the
concept of Hopfield's neural network model. This network is composed of
highly interconnected simple neuron-like analog processors which process the information in parallel. The effect of sigmoid or complex nonlinearities on the computational capability is unimportant in this problem. Steep sigmoid functions reduce
only the convergence time of the network. When a nonsingular matrix is given as
an input, the network converges spontaneously and rapidly to the correct inverse
matrix regardless of initial conditions. When a singular matrix is given as an
input, the network gives a stable optimum solution that depends upon initial conditions of the network.
REFERENCES
1. J. J. Hopfield and D. W. Tank, Biol. Cybern. 52, 141 (1985).
2. D. W. Tank and J. J. Hopfield, IEEE Trans. Circ. Sys. CAS-33, 533 (1986).
3. J. J. Hopfield, Proc. Natl. Acad. Sci. U.S.A. 79, 2554 (1982).
4. J. J. Hopfield, Proc. Natl. Acad. Sci. U.S.A. 81, 3088 (1984).
5. G. A. Bekey and W. J. Karplus, Hybrid Computation (Wiley, 1968), p. 244.
6. M. J. Maron, Numerical Analysis: A Practical Approach (Macmillan, 1982), p. 138.
7. H. Nakano and K. Hotate, Appl. Opt. 26, 917 (1987).
8. J. W. Goodman, A. R. Dias, and L. M. Woody, Opt. Lett. 2, 1 (1978).
9. J. W. Goodman, F. J. Leonberger, S.-Y. Kung, and R. A. Athale, IEEE Proc. 72, 850 (1984).
The Clusteron: Toward a Simple Abstraction for a Complex Neuron
Bartlett W. Mel
Computation and Neural Systems
Division of Biology
Caltech, 216-76
Pasadena, CA 91125
[email protected]
Abstract
Are single neocortical neurons as powerful as multi-layered networks? A
recent compartmental modeling study has shown that voltage-dependent
membrane nonlinearities present in a complex dendritic tree can provide
a virtual layer of local nonlinear processing elements between synaptic inputs and the final output at the cell body, analogous to a hidden layer
in a multi-layer network. In this paper, an abstract model neuron is introduced, called a clusteron, which incorporates aspects of the dendritic
"cluster-sensitivity" phenomenon seen in these detailed biophysical modeling studies. It is shown, using a clusteron, that a Hebb-type learning
rule can be used to extract higher-order statistics from a set of training patterns, by manipulating the spatial ordering of synaptic connections
onto the dendritic tree. The potential neurobiological relevance of these
higher-order statistics for nonlinear pattern discrimination is then studied
within a full compartmental model of a neocortical pyramidal cell, using
a training set of 1000 high-dimensional sparse random patterns.
1 INTRODUCTION
The nature of information processing in complex dendritic trees has remained an
open question since the origin of the neuron doctrine 100 years ago. With respect
to learning, for example, it is not known whether a neuron is best modeled as
a pseudo-linear unit, equivalent in power to a simple Perceptron, or as a general
nonlinear learning device, equivalent in power to a multi-layered network. In an attempt to characterize the input-output behavior of a whole dendritic tree containing
voltage-dependent membrane mechanisms, a recent compartmental modeling study
in an anatomically reconstructed neocortical pyramidal cell (anatomical data from
Douglas et al., 1991; "NEURON" simulation package provided by Michael Hines
and John Moore) showed that a dendritic tree rich in NMDA-type synaptic channels is selectively responsive to spatially clustered, as opposed to diffuse, patterns
of synaptic activation (Mel, 1992). For example, 100 synapses which were simultaneously activated at 100 randomly chosen locations about the dendritic arbor were
less effective at firing the cell than 100 synapses activated in groups of 5, at each of
20 randomly chosen dendritic locations. The cooperativity among the synapses in
each group is due to the voltage dependence of the NMDA channel: Each activated
NMDA synapse becomes up to three times more effective at injecting synaptic current when the post-synaptic membrane is locally depolarized by 30-40 m V from the
resting potential. When synapses are activated in a group, the depolarizing effects
of each helps the others (and itself) to move into this more efficient voltage range.
This work suggested that the spatial ordering of afferent synaptic connections onto
the dendritic tree may be a crucial determinant of cell responses to specific input
patterns. The nonlinear interactions among neighboring synaptic inputs further lent
support to the idea that two or more afferents that form closely grouped synaptic
connections on a dendritic tree may be viewed as encoding higher-order input-space
"features" to which the dendrite is sensitive (Feldman & Ballard, 1982; Mel, 1990;
Durbin & Rumelhart, 1990). The more such higher-order features are present in
a given input pattern, the more the spatial distribution of active synapses will
be clustered, and hence the more the post-synaptic cell will be inclined to fire in
response. In a demonstration of this idea through direct manipulation of synaptic
ordering, dendritic cluster-sensitivity was shown to allow the model neocortical
pyramidal cell to reliably discriminate 50 training images of natural scenes from
untrained control images (see Mel, 1992). Since all presented patterns activated the
same number of synapses of the same strength, and with no systematic variation
in their dendritic locations, the underlying dendritic "discriminant function" was
necessarily nonlinear.
A crucial question remains as to whether other, e.g. non-synaptic, membrane nonlinearities, such as voltage-dependent calcium channels in the dendritic shaft membrane, could enhance, abolish, or otherwise alter the dendritic cluster-sensitivity
phenomenon seen in the NMDA-only case. Some of the simulations presented in
the remainder of this paper include voltage-dependent calcium channels and/or an
anomalous rectification in the dendritic membrane. However, detailed discussions
of these channels and their effects will be presented elsewhere.
2 THE CLUSTERON
2.1 MOTIVATION
Figure 1: The Clusteron. Active input lines are designated by arrows; shading of
synapses reflects synaptic activation $a_i$ when $x_i \in \{0,1\}$ and weights are set to 1.
(The panel labels in the figure read $a_i = 3$ and $a_i = 2$.)
This paper deals primarily with an important extension to the compartmental modeling experiments and the hand-tuned demonstrations of nonlinear pattern discrimination capacity presented in (Mel, 1992). If the manipulation of synaptic ordering is
necessary for neurons to make effective use of their cluster-sensitive dendrites, then
a learning mechanism capable of appropriately manipulating synaptic ordering must
also be present in these neurons. An abstract model neuron called a clusteron is
presented here, whose input-output relation was inspired by the idea of dendritic
cluster-sensitivity, and whose learning rule is a variant of simple Hebbian learning.
The clusteron is a far simpler and more convenient model for the study of clustersensitive learning than the full-scale compartmental model described in (Mel, 1992),
whose solutions under varying stimulus conditions are computed through numerical
integration of a system of several hundred coupled nonlinear differential equations
(Hines, 1989). However, once the basic mathematical and algorithmic issues have
been better understood, more biophysically detailed models of this type of learning
in dendritic trees, as has been reported in (Brown et al., 1990), will be needed.
2.2 INPUT-OUTPUT BEHAVIOR
The clusteron is a particular second-order generalization of the thresholded linear
unit (TLU), exemplified by the common Perceptron. It consists of a "cell body"
where the globally thresholded output of the unit is computed, and a dendritic tree,
which for present purposes will be visualized as a single long branch attached to the
cell body (fig. 1). The dendritic tree receives a set of N weighted synaptic contacts
from a set of afferent "axons". All synaptic contacts are excitatory. The output of
the clusteron is given by
$$y = g\left( \sum_{i=1}^{N} a_i \right) \qquad (1)$$
where $a_i$ is the net excitatory input at synapse $i$ and $g$ is a thresholding nonlinearity.
Unlike the TLU, in which the net input due to a single input line $i$ is $w_i x_i$, the net
input at a clusteron synapse $i$ with weight $w_i$ is given by
$$a_i = w_i x_i \left( \sum_{j \in \mathcal{D}_i} w_j x_j \right), \qquad (2)$$
where $x_i$ is the direct input stimulus intensity at synapse $i$, as for the TLU, and
$\mathcal{D}_i = \{i-r, \ldots, i, \ldots, i+r\}$ represents the neighborhood of radius $r$ around synapse
$i$. It may be noted that the weight on each second-order term is constrained to
be the product of elemental weights $w_i w_j$, such that the clusteron has only $N$
underlying degrees of freedom as compared to the $N^2$ possible in a full second-order
model. For the simplest case of $x_i \in \{0,1\}$ and all weights set to 1, equation 2
says that the excitatory contribution of each active synapse is equal to the number
of coactive synapses within its neighborhood. A synapse that is activated alone
in its neighborhood thus provides a net excitatory input of $a_i = 1$; two synapses
activated near to each other each provide a net excitatory input of $a_i = a_j = 2$,
etc. The biophysical inspiration for the "multiplicative" relation in (2) is that,
the net injected current through a region of voltage-dependent dendritic membrane
can, under many circumstances, grow faster than linearly with increasing synaptic
input to that region. Unlike the dendritic membrane modeled at the biophysical
level, however, the clusteron in its current definition does not contain any saturating
nonlinearities in the dendrites.
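To make this input-output relation concrete, here is a minimal sketch (ours, not from the original paper; the output threshold value and the hard-threshold form of $g$ are our own assumptions) of the clusteron forward pass of equations 1-2:

```python
import numpy as np

def clusteron_output(x, w, r, theta_out):
    """Clusteron forward pass (eqs. 1-2): a_i = w_i x_i * sum_{j in D_i} w_j x_j,
    with D_i the radius-r neighborhood around synapse i (including i itself).
    Returns the thresholded output y = g(sum_i a_i) and the activations a."""
    wx = w * x
    N = len(x)
    a = np.empty(N)
    for i in range(N):
        lo, hi = max(0, i - r), min(N, i + r + 1)  # neighborhood, truncated at the ends
        a[i] = wx[i] * wx[lo:hi].sum()
    y = int(a.sum() >= theta_out)  # g modeled here as a hard threshold (an assumption)
    return y, a

# With unit weights, a lone active synapse gives a_i = 1; two adjacent
# active synapses give a_i = a_j = 2, matching the text above.
x = np.zeros(20)
x[[3, 10, 11]] = 1.0
y, a = clusteron_output(x, np.ones(20), r=2, theta_out=4.0)
print(y, a[3], a[10], a[11])  # 1 1.0 2.0 2.0
```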
2.3 THE LEARNING PROBLEM
The learning problem of present interest is that of two-category classification. A
pattern is a sparse N-element vector, where each component is a boolean random
variable equal to 1 with probability $p$, and 0 otherwise. Let $T = \{t_1, t_2, \ldots, t_P\}$ be
a training set consisting of $P$ randomly chosen patterns. The goal of the classifier
is to respond with $y = 1$ to any pattern in $T$, and $y = 0$ to all other "control"
patterns with the same average bit density $p$. Performance at this task is measured
by the probability of correct classification on a test set consisting of equal numbers
of training and control patterns.
2.4 THE LEARNING RULE
Learning in the clusteron is the process by which the ordering of synaptic connections onto the dendrite is manipulated. Second-order features that are statistically
prominent in the training set, i.e. pairs of pattern components that are coactivated
in the training set more often than average, can become encoded in the clusteron
as pairs of synaptic connections within the same dendritic neighborhood.
Learning proceeds as follows. Each pattern in T is presented once to the clusteron
in a random sequence, constituting one training epoch. At the completion of each
training epoch, each synapse i whose activation averaged over the training set
$$\langle a_i \rangle = \frac{1}{P} \sum_{p=1}^{P} a_i^{(p)}$$
falls below a threshold $\theta$, is switched with another randomly chosen subthreshold
synapse. The threshold can, for example, be chosen as $\theta = \frac{1}{N} \sum_{j=1}^{N} \langle a_j \rangle$, i.e.,
the averaged synaptic activation across all synapses and training patterns. Each
synapse whose average activation exceeds threshold $\theta$ is left undisturbed. Thus,
if a synapse is often coactivated with its neighbors during learning, its average
activation is high, and its connection is stabilized. If it is only rarely coactivated
with its neighbors during learning, it loses its current connection, and is given the
opportunity to stabilize a new connection at a new location.

Figure 2: Distribution of 100 active synapses for a trained pattern (A) vs. a random
control pattern (B); synapse locations are designated by black dots. Layout A is
statistically more "clustery" than B, as evidenced by the presence of several clusters
of 5 or more active synapses not found in B. While the total synaptic conductance
activated in layout A was 20% less than that in layout B (linked to local variations in
input-resistance), layout A generated 5 spikes at the soma, while layout B generated
none.
The dynamics of clusteron learning may be caricatured as follows. At the start
of learning, each "poor performing" synaptic connection improves its average activation level when switched to a new dendritic location where, by definition, it is
expected to be an "average performer". The average global response y to training
patterns is thus also expected to increase during early training epochs. The average
response to random controls remains unchanged, however, since there is no systematic structure in the ordering of synaptic connections relevant to any untrained
pattern. This relative shift in the mean responses to training vs. control patterns
is the basis for discrimination between them. The learning process approaches its
asymptote as each pair of synapses switched, on average, disturbs the optimized
clusteron neighborhood structure as much as it improves it.
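A minimal sketch of one training epoch of this rule follows (our own illustration, reusing clusteron_output from the sketch above; pairing up the swapped synapses is one arbitrary way to realize "switched with another randomly chosen subthreshold synapse"):

```python
import numpy as np

def train_epoch(patterns, order, w, r):
    """One clusteron training epoch. order[i] is the afferent axon wired to
    dendritic location i; swapping entries of `order` reorders the synapses."""
    N = len(order)
    A = np.zeros(N)
    for x in patterns:                      # present each training pattern once
        _, a = clusteron_output(x[order], w, r, theta_out=0.0)
        A += a
    mean_a = A / len(patterns)              # <a_i>, averaged over the training set
    theta = mean_a.mean()                   # threshold: grand mean activation
    sub = np.flatnonzero(mean_a < theta)    # subthreshold synapses lose their spot
    np.random.shuffle(sub)
    for i, j in zip(sub[0::2], sub[1::2]):  # swap random pairs of subthreshold synapses
        order[i], order[j] = order[j], order[i]
    return order
```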
3 RESULTS
The clusteron learning rule leads to a permutation of synaptic input connections
having the property that the distribution of activated synapses in the dendritic
tree associated with the presentation of a typical training pattern is statistically
more "clustery" than the distribution of activated synapses associated with the
presentation of a random control pattern.
For a given training set size, however, it is crucial to establish that the clustery
distributions of active synapses associated with training patterns are in fact of a
type that can be reliably discriminated-within the detailed biophysical mode/from diffuse stimulation of the dendritic tree corresponding to unfamiliar stimulus
patterns. In order to investigate this question, a clusteron with 17,000 synapses was
trained with 1000 training patterns. This number of synapses was chosen in order
that a direct map exists between clusteron synapses and dendritic spines, which
were assumed to lie at 1 μm intervals along the approximately 17,000 μm of total
dendritic length of the model neocortical neuron (from Douglas et al., 1991). In
these runs, exactly 100 of the 17,000 bits were randomly set in each of the training
and control patterns, such that every pattern activated exactly 100 synapses. After
200 training epochs, 100 training patterns and 100 control patterns were selected as
a test set. For each test pattern, the locations of its 100 active clusteron synapses
were mapped onto the dendritic tree in the biophysical model by traversing the
latter in depth-first order. For example, training pattern #36 activated synapses
as shown in fig. 2A, with synapse locations indicated by black dots. The layout in
B was due to a control pattern. It may be perceived that layout A contains several
clear groupings of 5 or more synapses that are not observed in layout B.
Within in the biophysical model, the conductance of each synapse, containing both
NMDA and non-NMDA components, was scaled inversely with the input resistance
measured locally at the dendritic spine head. Membrane parameters were similar
to those used in (Mel, 1992); a high-threshold non-inactivating calcium conductance and an anomalous rectifier were used in these experiments as well, and were
uniformly distributed over most of the dendritic tree. In the simulation run for
each pattern, each of the 100 activated synapses was driven at 100 Hz for 100 ms,
asynchronously, and the number of action potentials generated at the soma was
counted. The total activated synaptic conductance in fig. 2A was 20% less than
that activated by control layout B. However, layout A generated 5 somatic spikes
while layout B generated none.
Fig. 3 shows the cell responses averaged over training patterns, four types of degraded training patterns, and control patterns. Most saliently, the average spike
count in response to a training pattern was 3 times the average response to a control pattern. Not surprisingly, degraded training patterns gave rise to degraded
responses. It is crucial to reiterate that all patterns, regardless of category, activated an identical number of synapses, with no average difference in their synaptic strengths or in dendritic eccentricity. Only the spatial distributions of active
synapses were different among categories.
[Figure 3 is a bar chart titled "1000 Training Patterns"; the horizontal axis shows six stimulus categories: training patterns, T/T, T/T/T, 20% noise, 50% noise, and control patterns.]
Figure 3: Average cell responses to training patterns, degraded training patterns,
and control patterns. Categories designated T/T and T/T/T represented feature
composites of 2 or 3 training patterns, respectively. Degraded responses to these
categories of stimulus patterns was evidence for the underlying nonlinearity of the
dendritic discriminant function.
4 CONCLUSION
These experiments within the clusteron model neuron have shown that the assumption of (1) dendritic cluster-sensitivity, (2) a combinatorially rich interface structure
that allows every afferent axon potential access to many dendritic loci, and (3) a
local Hebb-type learning rule for stabilizing newly formed synapses, are sufficient in
principle to allow the learning of nonlinear input-output relations with a single dendritic tree. The massive rearrangement of synapses seen in these computational experiments is not strictly necessary; much of the work could be done instead through
standard Hebbian synaptic potentiation, if a larger set of post-synaptic neurons is
assumed to be available to each afferent instead of a single neuron as used here.
Architectural issues relevant to this issue have been discussed at length in (Mel,
1990; Mel & Koch, 1990).
An analysis of the storage capacity of the clusteron will be presented elsewhere.
Acknowledgements
This work was supported by the Office of Naval Research, the James McDonnell
Foundation, and National Institute of Mental Health. Thanks to Christof Koch for
providing an excellent working environment, Ken Miller for helpful discussions, and
to Rodney Douglas for discussions and use of his neurons.
References
Brown, T.H., Mainen, Z.F., Zador, A.M., & Claiborne, B.J. Self-organization of
Hebbian synapses in hippocampal neurons. In Advances in Neural Information
Processing Systems, vol. 3, R. Lippmann, J. Moody, & D. Touretzky, (Eds.), Palo
Alto: Morgan Kaufmann, 1991.
Douglas, R.J., Martin, K.A.C., & Whitteridge, D. An intracellular analysis of the
visual responses of neurones in striate visual cortex. J. Physiol., 1991, 440, 659-696.
Durbin, R. & Rumelhart, D.E. Product units: a computationally powerful and
biologically plausible extension to backpropagation networks. Neural Computation,
1989, 1, 133.
Feldman, J.A. & Ballard, D.H. Connectionist models and their properties. Cognitive
Science, 1982, 6, 205-254.
Hines, M. A program for simulation of nerve equations with branching geometries.
Int. J. Biomed. Comput., 1989, 24, 55-68.
Mel, B.W. The sigma-pi column: a model for associative learning in cerebral neocortex. CNS Memo #6, Computation and Neural Systems Program, California
Institute of Technology, 1990.
Mel, B.W. NMDA-based pattern classification in a modeled cortical neuron. 1992,
Neural Computation, in press.
Mel, B.W. & Koch, C. Sigma-pi learning: On radial basis functions and cortical
associative learning. In Advances in neural information processing systems, vol. 2,
D.S. Touretzky, (Ed.), San Mateo, CA: Morgan Kaufmann, 1990.
3,867 | 4,500 | Dimensionality Dependent PAC-Bayes Margin Bound
Chi Jin
Key Laboratory of Machine Perception, MOE
School of Physics
Peking University
[email protected]
Liwei Wang
Key Laboratory of Machine Perception, MOE
School of EECS
Peking University
[email protected]
Abstract
Margin is one of the most important concepts in machine learning. Previous margin bounds, both for SVM and for boosting, are dimensionality independent. A
major advantage of this dimensionality independency is that it can explain the excellent performance of SVM whose feature spaces are often of high or infinite
dimension. In this paper we address the problem whether such dimensionality independency is intrinsic for the margin bounds. We prove a dimensionality dependent PAC-Bayes margin bound. The bound is monotone increasing with respect
to the dimension when keeping all other factors fixed. We show that our bound
is strictly sharper than a previously well-known PAC-Bayes margin bound if the
feature space is of finite dimension; and the two bounds tend to be equivalent as
the dimension goes to infinity. In addition, we show that the VC bound for linear
classifiers can be recovered from our bound under mild conditions. We conduct
extensive experiments on benchmark datasets and find that the new bound is useful for model selection and is usually significantly sharper than the dimensionality
independent PAC-Bayes margin bound as well as the VC bound for linear classifiers.
1 Introduction
Linear classifiers, including SVM and boosting, play an important role in machine learning. A central concept in the generalization analysis of linear classifiers is margin. There have been extensive
works on bounding the generalization errors of SVM and boosting in terms of margins (with various
definitions such as $\ell_2$, $\ell_1$, soft, hard, average, minimum, etc.).
In the 1970s, Vapnik pointed out that large margin can imply good generalization. Using the fat-shattering dimension, Shawe-Taylor et al. [1] proved a margin bound for linear classifiers. This
bound was improved and simplified in a series of works [2, 3, 4, 5] mainly based on the PAC-Bayes
theory [6] which was developed originally for stochastic classifiers. (See Section 2 for a brief review
of the PAC-Bayes theory and the PAC-Bayes margin bounds.) All these bounds state that if a linear
classifier in the feature space induces large margins for most of the training examples, then it has a
small generalization error bound independent of the dimensionality of the feature space.
The (l1 ) margin has also been extensively studied for boosting to explain its generalization ability.
Schapire et al. [7] proved a margin bound for the generalization error of voting classifiers. The bound
is independent of the number of base classifiers combined in the voting classifier.¹ This margin
bound was greatly improved in [8, 9] using (local) Rademacher complexities. There also exist
improved margin bounds for boosting from the viewpoint of PAC-Bayes theory [10], the diversity
of base classifiers [11], and different definition of margins [12, 13].
¹The bound depends on the VC dimension of the base hypothesis class. Nevertheless, given the VC dimension of the base hypothesis space, the bound does not depend on the number of the base classifiers, which can be seen as the dimension of the feature space.
The aforementioned margin bounds are all dimensionality independent. That is, the bounds are
solely characterized by the margins on the training data and do not depend on the dimension of
feature space. A major advantage of such dimensionality independent margin bounds is that they
can explain the generalization ability of SVM and boosting whose feature spaces have high or infinite
dimension, in which case the standard VC bound becomes trivial.
Although very successful in bounding the generalization error, a natural question is whether this
dimensionality independency is intrinsic for margin bounds. In this paper we explore this problem.
Building upon the PAC-Bayes theory, we prove a dimensionality dependent margin bound. This
bound is monotone increasing with respect to the dimension when keeping all other factors fixed.
Comparing with the PAC-Bayes margin bound of Langford [4], the new bound is strictly sharper
when the feature space is of finite dimension; and the two bounds tend to be equal as the dimension
goes to infinity.
We conduct extensive experiments on benchmark datasets. The experimental results show that the
new bound is significantly sharper than the dimensionality independent PAC-Bayes margin bound
as well as the VC bound for linear classifiers on relatively large datasets. The bound is also found
useful for model selection.
The rest of this paper is organized as follows. Section 2 contains a brief review of the PAC-Bayes
theory and the dimensionality independent PAC-Bayes margin bound. In Section 3 we give the
dimensionality dependent PAC-Bayes margin bound and further improvements. We provide the
experimental results in Section 4, and conclude in Section 5. Due to the space limit, all the proofs
are given in the supplementary material.
2 Background
Let $\mathcal{X}$ be the instance space or generally the feature space. In this paper we always assume $\mathcal{X} = \mathbb{R}^d$.
We consider binary classification problems and let $\mathcal{Y} = \{-1, 1\}$. Examples are drawn independently
according to an underlying distribution $D$ over $\mathcal{X} \times \mathcal{Y}$. Let $P_D(A(x, y))$ denote the probability of
event $A$ when an example $(x, y)$ is chosen according to $D$. Let $S$ denote a training set of $n$ i.i.d.
examples. We denote by $P_S(A(x, y))$ the probability of event $A$ when an example $(x, y)$ is chosen
at random from $S$. Similarly we denote by $E_D$ and $E_S$ the corresponding expectations. If $c$ is
a classifier, then we denote by $\mathrm{er}_D(c) = P_D(y \neq c(x))$ the generalization error of $c$, and let
$\mathrm{er}_S(c) = P_S(y \neq c(x))$ be the empirical error.
An important type of classifier studied in this paper is the stochastic classifier. Let $C$ be a set of
classifiers, and let $Q$ be a probability distribution of classifiers on $C$. A stochastic classifier defined
by $Q$ randomly selects $c \in C$ according to $Q$. When clear from the context, we often denote by
$\mathrm{er}_D(Q)$ and $\mathrm{er}_S(Q)$ the generalization and empirical error of the stochastic classifier $Q$ respectively.
That is,
$$\mathrm{er}_D(Q) = \mathbb{E}_{c \sim Q}[\mathrm{er}_D(c)]; \qquad \mathrm{er}_S(Q) = \mathbb{E}_{c \sim Q}[\mathrm{er}_S(c)].$$
A probability distribution $Q$ of classifiers also defines a deterministic classifier, the voting classifier,
which we denote by $v_Q$. For $x \in \mathcal{X}$,
$$v_Q(x) = \mathrm{sgn}[\mathbb{E}_{c \sim Q}\, c(x)].$$
In this paper we always consider homogeneous linear classifiers², or stochastic classifiers whose
distribution is over homogeneous linear classifiers. Let $\mathcal{X} = \mathbb{R}^d$. For any $w \in \mathbb{R}^d$, the linear
classifier $c_w$ is defined as $c_w(\cdot) = \mathrm{sgn}[\langle w, \cdot \rangle]$. When we consider a probability distribution over
all homogeneous linear classifiers $c_w$ in $\mathbb{R}^d$, we can equivalently consider a distribution of $w \in \mathbb{R}^d$.
The work in this paper is based on the PAC-Bayes theory. PAC-Bayes theory is a beautiful generalization of the classical PAC theory to the setting of Bayes learning. It gives generalization error
bounds for stochastic classifiers. The PAC-Bayes theorem was first proposed by McAllester [6].
The following elegant version is due to Langford [4].
²This does not sacrifice any generality since linear classifiers can be easily transformed to homogeneous linear classifiers by adding a new dimension.
Theorem 2.1. Let $P$, $Q$ denote probability distributions of classifiers. For any $P$ and any $\delta \in (0, 1)$,
with probability $1 - \delta$ over the random draw of $n$ training examples,
$$\mathrm{kl}\left( \mathrm{er}_S(Q) \,\|\, \mathrm{er}_D(Q) \right) \le \frac{KL(Q\|P) + \ln \frac{n+1}{\delta}}{n} \qquad (1)$$
holds simultaneously for all distributions $Q$. Here $KL(Q\|P)$ is the Kullback-Leibler divergence of
distributions $Q$ and $P$; $\mathrm{kl}(a\|b)$ for $a, b \in [0, 1]$ is the Bernoulli KL divergence defined as
$$\mathrm{kl}(a\|b) = a \log \frac{a}{b} + (1-a) \log \frac{1-a}{1-b}.$$
The above PAC-Bayes theorem states that if a stochastic classifier, whose distribution $Q$ is close (in
the sense of KL divergence) to the fixed prior $P$, has a small training error, then its generalization
error is small.
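Applying Theorem 2.1 numerically requires inverting the binary KL divergence: given $\mathrm{er}_S(Q)$ and the right-hand side of (1), one solves for the largest admissible $\mathrm{er}_D(Q)$. A small sketch (ours, not from the paper) using bisection:

```python
import math

def kl_bernoulli(a, b):
    """Binary KL divergence kl(a||b), with clipping to avoid log(0)."""
    eps = 1e-12
    a = min(max(a, eps), 1 - eps)
    b = min(max(b, eps), 1 - eps)
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def kl_inverse_upper(er_s, rhs, tol=1e-9):
    """Largest q >= er_s with kl(er_s || q) <= rhs, found by bisection
    (kl(er_s || q) is increasing in q on [er_s, 1))."""
    lo, hi = er_s, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(er_s, mid) <= rhs:
            lo = mid
        else:
            hi = mid
    return lo

# Example: er_S(Q) = 0.05, KL(Q||P) = 10, n = 1000, delta = 0.05.
n, delta = 1000, 0.05
rhs = (10 + math.log((n + 1) / delta)) / n
print(kl_inverse_upper(0.05, rhs))  # upper bound on er_D(Q)
```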
PAC-Bayes theory has been improved and generalized in a series of works [5, 14]. For important
recent results please refer to [14]. [15] generalizes the KL divergence in the PAC-Bayes theorem
to arbitrary convex functions. [15, 16, 17, 18, 19] utilize improved PAC-Bayes bounds to develop
learning algorithms and perform model selection.
Very interestingly, it is shown in [2] that one can derive a margin bound for linear classifiers (including SVM) from the PAC-Bayes theorem quite easily. It is much simpler and slightly tighter than
previous margin bounds for SVM [1, 20]. The following simplified and refined version can be found
in [4].
Theorem 2.2 ([4]). Let $\mathcal{X} = \mathbb{R}^d$. Let $Q(\mu, \hat{w})$ ($\mu > 0$, $\hat{w} \in \mathbb{R}^d$, $\|\hat{w}\| = 1$) denote the distribution of
homogeneous linear classifiers $c_w$, where $w \sim N(\mu\hat{w}, I)$. For any $\delta \in (0, 1)$, with probability $1 - \delta$
over the random draw of $n$ training examples,
$$\mathrm{kl}\left(\mathrm{er}_S(Q(\mu,\hat{w})) \,\|\, \mathrm{er}_D(Q(\mu,\hat{w}))\right) \le \frac{\frac{\mu^2}{2} + \ln\frac{n+1}{\delta}}{n} \qquad (2)$$
holds simultaneously for all $\mu > 0$ and all $\hat{w} \in \mathbb{R}^d$ with $\|\hat{w}\| = 1$. In addition, the empirical error
of the stochastic classifier can be written as
$$\mathrm{er}_S(Q(\mu,\hat{w})) = \mathbb{E}_S\, \bar{\Phi}(\mu\,\gamma(\hat{w}; x, y)), \qquad (3)$$
where $\gamma(\hat{w}; x, y) = y\,\frac{\langle \hat{w}, x\rangle}{\|x\|}$ is the margin of $(x, y)$ with respect to the unit vector $\hat{w}$; and
$$\bar{\Phi}(t) = 1 - \Phi(t) = \frac{1}{\sqrt{2\pi}}\int_t^{\infty} e^{-\tau^2/2}\, d\tau \qquad (4)$$
is the probability of the upper tail of the Gaussian distribution.
According to Theorem 2.2, if there is a linear classifier $\hat{w} \in \mathbb{R}^d$ inducing large margins for most
training examples, i.e., $\gamma(\hat{w}; x, y)$ is large for most $(x, y)$, then choosing a relatively small $\mu$ would
yield a small $\mathrm{er}_S(Q(\mu,\hat{w}))$ and in turn a small upper bound for the generalization error of the
stochastic classifier $Q(\mu,\hat{w})$. Note that this bound does not depend on the dimensionality $d$. In fact
almost all previously known margin bounds are dimensionality independent.³
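As a concrete illustration (our own sketch, reusing kl_inverse_upper from the sketch after Theorem 2.1), the empirical stochastic error in (3) and the resulting bound on $\mathrm{er}_D(Q(\mu,\hat{w}))$ can be computed directly from the training margins; the margin values below are synthetic, for illustration only:

```python
import numpy as np
from scipy.stats import norm

def di_bound_stochastic(margins, mu, n, delta):
    """DI bound of Theorem 2.2 on er_D(Q(mu, w_hat)).
    `margins` holds y <w_hat, x> / ||x|| over the training set."""
    er_s = np.mean(norm.sf(mu * margins))  # er_S(Q) = E_S Phi_bar(mu * gamma), eq. (3)
    rhs = (mu**2 / 2 + np.log((n + 1) / delta)) / n
    return kl_inverse_upper(er_s, rhs)

margins = np.random.uniform(0.1, 0.5, size=1000)   # toy margins
bounds = [di_bound_stochastic(margins, mu, 1000, 0.05)
          for mu in np.linspace(1, 50, 100)]
print(min(bounds))  # the bound holds uniformly in mu, so the minimum is valid
```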
PAC-Bayes theory only provides bounds for stochastic classifiers. In practice however, users often
prefer deterministic classifiers. There is a close relation between the error of a stochastic classifier
defined by distribution Q and the error of the deterministic voting classifier vQ . The following
simple result is well-known.
Proposition 2.3. Let $v_Q$ be the voting classifier defined by distribution $Q$. That is, $v_Q(\cdot) =
\mathrm{sgn}[\mathbb{E}_{c\sim Q}\, c(\cdot)]$. Then for any $Q$,
$$\mathrm{er}_D(v_Q) \le 2\, \mathrm{er}_D(Q). \qquad (5)$$
Combining Theorem 2.2 and Proposition 2.3, one can upper bound the generalization error of the
voting classifier $v_Q$ associated with $Q(\mu,\hat{w})$ given in Theorem 2.2. In fact, it is easy to see that
$v_Q = c_{\hat{w}}$; the voting classifier is exactly the linear classifier $\hat{w}$. Thus
$$\mathrm{er}_D(c_{\hat{w}}) \le 2\,\mathrm{er}_D(Q(\mu,\hat{w})). \qquad (6)$$
³There exist dimensionality dependent margin bounds [21]. However these bounds grow unboundedly as
the dimensionality tends to infinity.
From Theorem 2.2, Proposition 2.3 and (6), we have that with probability $1-\delta$ the following margin
bound holds for all classifiers $c_{\hat{w}}$ with $\hat{w} \in \mathbb{R}^d$, $\|\hat{w}\| = 1$ and all $\mu > 0$:
$$\mathrm{kl}\left(\mathrm{er}_S(Q(\mu,\hat{w})) \,\Big\|\, \frac{\mathrm{er}_D(c_{\hat{w}})}{2}\right) \le \frac{\frac{\mu^2}{2} + \ln\frac{n+1}{\delta}}{n}. \qquad (7)$$
One disadvantage of the bounds in (5), (6) and (7) is that they involve a multiplicative factor of 2.
In general, the factor 2 cannot be improved. However for linear classifiers with large margins there
can exist tighter bounds. The following is a slightly refined version of the bounds given in [2, 3].
Proposition 2.4 ([2, 3]). Let $Q(\mu,\hat{w})$ and $v_Q = c_{\hat{w}}$ be defined as above. Let $\mathrm{er}_{D,\gamma}(Q(\mu,\hat{w})) =
\mathbb{E}_{w \sim N(\mu\hat{w}, I)} P_D\left(y\frac{\langle w, x\rangle}{\|x\|} \le \gamma\right)$ be the error of the stochastic classifier with margin $\gamma$. Then for all
$\gamma \ge 0$,
$$\mathrm{er}_D(c_{\hat{w}}) \le \mathrm{er}_{D,\gamma}(Q(\mu,\hat{w})) + \bar{\Phi}(\gamma). \qquad (8)$$
The bound states that if the stochastic classifier induces small errors with large margin $\gamma$, then the
linear (voting) classifier has only a slightly larger generalization error than the stochastic classifier.
However sometimes (8) can be larger than (5). The two bounds have a different regime in which
they dominate [2]. It is also worth pointing out that the margin $y\frac{\langle w, x\rangle}{\|x\|}$ considered in Proposition
2.4 is unnormalized with respect to $w$. See Section 3 for more discussion.
To apply Proposition 2.4, one needs to further bound $\mathrm{er}_{D,\gamma}(Q(\mu,\hat{w}))$ by its empirical version
$\mathrm{er}_{S,\gamma}(Q(\mu,\hat{w})) := \mathbb{E}_{w\sim N(\mu\hat{w},I)} P_S\left(y\frac{\langle w,x\rangle}{\|x\|} \le \gamma\right) = \mathbb{E}_S\, \bar{\Phi}\left(\mu y \frac{\langle \hat{w},x\rangle}{\|x\|} - \gamma\right)$. With slight modifications of Theorem 2.2, one can show that for any $\gamma \ge 0$ with probability $1 - \delta$ the following bound
is valid for all $\mu$ and $\hat{w}$ uniformly:
$$\mathrm{kl}\left(\mathrm{er}_{S,\gamma}(Q(\mu,\hat{w})) \,\|\, \mathrm{er}_{D,\gamma}(Q(\mu,\hat{w}))\right) \le \frac{\frac{\mu^2}{2} + \ln\frac{n+1}{\delta}}{n}. \qquad (9)$$
The following proposition combines the above results.
Proposition 2.5. For any $\gamma \ge 0$ and any $\delta > 0$, with probability $1 - \delta$ the following bound is valid
for all $\mu$ and $\hat{w}$ uniformly:
$$\mathrm{kl}\left(\mathrm{er}_{S,\gamma}(Q(\mu,\hat{w})) \,\|\, \mathrm{er}_D(c_{\hat{w}}) - \bar{\Phi}(\gamma)\right) \le \frac{\frac{\mu^2}{2} + \ln\frac{n+1}{\delta}}{n}. \qquad (10)$$
Note that this last bound is not uniform in $\gamma$; see also [3].
Improving the multiplicative factor was also studied in [22, 17], in which the variance of the stochastic classifier is also bounded via the PAC-Bayes theorem, and the Chebyshev inequality can be used.
3 Theoretical Results
In this section we give the theoretical results. The main result of this paper is Theorem 3.1, which
provides a dimensionality dependent PAC-Bayes margin bound.
Theorem 3.1. Let $Q(\mu,\hat{w})$ ($\mu > 0$, $\hat{w} \in \mathbb{R}^d$, $\|\hat{w}\| = 1$) denote the distribution of linear classifiers
$c_w(\cdot) = \mathrm{sgn}[\langle w, \cdot \rangle]$, where $w \sim N(\mu\hat{w}, I)$. For any $\delta \in (0, 1)$, with probability $1 - \delta$ over the
random draw of $n$ training examples,
$$\mathrm{kl}\left(\mathrm{er}_S(Q(\mu,\hat{w})) \,\|\, \mathrm{er}_D(Q(\mu,\hat{w}))\right) \le \frac{\frac{d}{2}\ln\left(1 + \frac{\mu^2}{d}\right) + \ln\frac{n+1}{\delta}}{n} \qquad (11)$$
holds simultaneously for all $\mu > 0$ and all $\hat{w} \in \mathbb{R}^d$ with $\|\hat{w}\| = 1$. Here $\mathrm{er}_S(Q(\mu,\hat{w})) =
\mathbb{E}_S\, \bar{\Phi}(\mu\,\gamma(\hat{w}; x, y))$ and $\gamma(\hat{w}; x, y) = y\frac{\langle\hat{w},x\rangle}{\|x\|}$ are the same as in Theorem 2.2.
Comparing Theorem 3.1 with Theorem 2.2, it is easy to see that the following proposition holds.
Proposition 3.2. The bound (11) is sharper than (2) for any $d < \infty$, and the two bounds tend to be
equivalent as $d \to \infty$.
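The comparison in Proposition 3.2 is easy to verify numerically: the complexity term $\frac{d}{2}\ln(1+\mu^2/d)$ of (11) is strictly below the term $\mu^2/2$ of (2) for finite $d$ and approaches it as $d \to \infty$. A small check (our own):

```python
import numpy as np

mu = 10.0
for d in [10, 100, 1000, 10**6]:
    dd_term = 0.5 * d * np.log1p(mu**2 / d)  # complexity term of bound (11)
    di_term = 0.5 * mu**2                    # complexity term of bound (2)
    print(d, round(dd_term, 2), di_term)
# dd_term: 11.99, 34.66, 47.66, ~50.0 -- always below di_term = 50.0,
# and approaching it as d grows.
```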
Theorem 3.1 is the first dimensionality dependent margin bound that remains nontrivial in infinite
dimension.
Theorem 3.1 and Theorem 2.2 are uniform bounds for $\mu$. Thus one can choose an appropriate $\mu$ to
optimize each bound respectively. Note that $\mathrm{er}_S(Q(\mu,\hat{w}))$ in the LHS of the two bounds is monotone
decreasing with respect to $\mu$. Compared to Theorem 2.2, Theorem 3.1 has the advantage that its
RHS scales only in $O(\ln \mu)$ rather than $O(\mu^2)$, and therefore allows choosing a very large $\mu$.
As described in (7) in Section 2, we can also obtain a margin bound for the deterministic linear
classifier $c_{\hat{w}}$ by combining (11) with $\mathrm{er}_D(c_{\hat{w}}) \le 2\,\mathrm{er}_D(Q(\mu,\hat{w}))$.
In addition, note that the VC dimension of homogeneous linear classifiers in $\mathbb{R}^d$ is $d$. From Theorem
3.1 we can almost recover the VC bound [23]
$$\mathrm{er}_D(c) \le \mathrm{er}_S(c) + \sqrt{\frac{d\left(1 + \ln\frac{2n}{d}\right) + \ln\frac{4}{\delta}}{n}} \qquad (12)$$
for homogeneous linear classifiers in $\mathbb{R}^d$ under mild conditions. Formally we have the following
corollary.
Corollary 3.3. Theorem 3.1 implies the following result. Suppose $n > 5$. For any $\delta > 2e^{-1/8} n^{-1/8}$,
with probability $1 - \delta$ over the random draw of $n$ training examples,
$$\mathrm{er}_D(c_w) \le \mathrm{er}_S(c_w) + \sqrt{\frac{d\ln\left(1 + \frac{2n}{d}\right) + 2\ln\frac{2(n+1)}{\delta}}{n}} + \frac{1}{n}\sqrt{\frac{d + \ln n}{d}} \qquad (13)$$
holds simultaneously for all homogeneous linear classifiers $c_w$ with $w \in \mathbb{R}^d$ satisfying
$$P_D\left(y\,\frac{\langle w, x\rangle}{\|w\|\|x\|} \le \frac{(\ln n)^{1/2}\, d^{3/2}}{4n^2}\right) \le \frac{1}{4}\sqrt{\frac{d + \ln n}{n}}. \qquad (14)$$
Condition (14) is easy to satisfy if $d \ll n$.
In a sense, the dimensionality dependent margin bound in Theorem 3.1 unifies the dimensionality
independent margin bound and the VC bound for linear classifiers.
Although it is not easy to theoretically quantify how much sharper (11) is than (2) and the VC bound
(12) (because the first two bounds hold uniformly for all $\mu$), in Section 4 we will demonstrate by
experiments that the new bound is usually significantly better than (2) and (12) on relatively large
datasets.
3.1 Improving the Multiplicative Factor
As we mentioned in Section 2, Proposition 2.3 involves a multiplicative factor of 2 when bounding
the error of the deterministic voting classifier by the error of the stochastic classifier. Note that in
general $\mathrm{er}_D(c_{\hat{w}}) \le 2\,\mathrm{er}_D(Q(\mu,\hat{w}))$ cannot be improved (consider the case that with probability one
the data has zero margin with respect to $\hat{w}$). Here we study how to improve it for large margin
classifiers.
Recall that Proposition 2.4 gives $\mathrm{er}_D(c_{\hat{w}}) \le \mathrm{er}_{D,\gamma}(Q(\mu,\hat{w})) + \bar{\Phi}(\gamma)$, which bounds the generalization error of the linear classifier in terms of the error of the stochastic classifier with margin $\gamma \ge 0$. As pointed out in [2], this bound is not always better than Proposition 2.3 (i.e.,
$\mathrm{er}_D(c_{\hat{w}}) \le 2\,\mathrm{er}_D(Q(\mu,\hat{w}))$). The two bounds each have a different dominant regime. Our first result
in this subsection is the following simple improvement over both Proposition 2.3 and Proposition
2.4.
Proposition 3.4. Using the notions in Proposition 2.4, we have that for all $\gamma \ge 0$,
$$\mathrm{er}_D(c_{\hat{w}}) \le \frac{1}{\Phi(\gamma)}\,\mathrm{er}_{D,\gamma}(Q(\mu,\hat{w})), \qquad (15)$$
where $\Phi(\gamma)$ is defined in Theorem 2.2.
It is easy to see that Proposition 2.3 is a special case of Proposition 3.4: just let $\gamma = 0$ in (15) and we
recover (6). Thus Proposition 3.4 is always sharper than Proposition 2.3. It is also easy to show that
(15) is sharper than (8) in Proposition 2.4 whenever the bounds are nontrivial. Formally we have the
following proposition.
Proposition 3.5. Suppose the RHS of (8) or the RHS of (15) is smaller than 1, i.e., at least one of
the two bounds is nontrivial. Then (15) is sharper than (8).
As mentioned in Section 2, the margins discussed so far in this subsection are unnormalized with
respect to $w \in \mathbb{R}^d$. That is, we consider $y\frac{\langle w,x\rangle}{\|x\|}$. In the following we will focus on normalized
margins $y\frac{\langle w,x\rangle}{\|w\|\|x\|}$. It will soon be clear that this brings additional benefits when combining with the
dimensionality dependent margin bound.
Let $\mathrm{er}^N_{D,\gamma}(Q(\mu,\hat{w})) = \mathbb{E}_{w\sim N(\mu\hat{w},I)} P_D\left(y\frac{\langle w,x\rangle}{\|w\|\|x\|} \le \gamma\right)$ be the true error of the stochastic classifier
$Q(\mu,\hat{w})$ with normalized margin $\gamma \in [-1, 1]$. Also let $\mathrm{er}^N_{S,\gamma}(Q(\mu,\hat{w}))$ be its empirical version. We
have the following lemma.
Lemma 3.6. For any $\mu > 0$, any $\hat{w} \in \mathbb{R}^d$ with $\|\hat{w}\| = 1$ and any $\gamma \ge 0$,
$$\mathrm{er}_D(c_{\hat{w}}) \le \frac{\mathrm{er}^N_{D,\gamma}(Q(\mu,\hat{w}))}{\Phi(\mu\gamma)}. \qquad (16)$$
If $\mathrm{er}^N_{D,\gamma}(Q)$ is only slightly larger than $\mathrm{er}_D(Q)$ for a not-too-small $\gamma > 0$, then $\frac{\mathrm{er}^N_{D,\gamma}(Q)}{\Phi(\mu\gamma)}$ can be
much smaller than $2\,\mathrm{er}_D(Q)$ even with a not too large $\mu$. Also note that setting $\gamma = 0$ in (16), we
can recover (6).
The true margin error $\mathrm{er}^N_{D,\gamma}(Q)$ can be bounded by its empirical version similarly to Theorem 3.1: For
any $\gamma \ge 0$ and any $\delta > 0$, with probability $1 - \delta$,
$$\mathrm{kl}\left(\mathrm{er}^N_{S,\gamma}(Q(\mu,\hat{w})) \,\|\, \mathrm{er}^N_{D,\gamma}(Q(\mu,\hat{w}))\right) \le \frac{\frac{d}{2}\ln\left(1+\frac{\mu^2}{d}\right) + \ln\frac{n+1}{\delta}}{n} \qquad (17)$$
holds simultaneously for all $\mu > 0$ and $\hat{w} \in \mathbb{R}^d$ with $\|\hat{w}\| = 1$.
Combining the previous two results we have a dimensionality dependent margin bound for the linear
classifier $c_{\hat{w}}$.
Proposition 3.7. Let $Q(\mu,\hat{w})$ be defined as before. For any $\gamma \ge 0$ and any $\delta > 0$, with probability
$1 - \delta$ over the random draw of $n$ training examples,
$$\mathrm{kl}\left(\mathrm{er}^N_{S,\gamma}(Q(\mu,\hat{w})) \,\|\, \mathrm{er}_D(c_{\hat{w}})\,\Phi(\mu\gamma)\right) \le \frac{\frac{d}{2}\ln\left(1+\frac{\mu^2}{d}\right) + \ln\frac{n+1}{\delta}}{n} \qquad (18)$$
holds simultaneously for all $\mu > 0$ and $\hat{w} \in \mathbb{R}^d$ with $\|\hat{w}\| = 1$.
To see how Proposition 3.7 improves the multiplicative factor, let's take a closer look at the bound
(18). Observe that as $\mu$ gets large, $\mathrm{er}^N_{S,\gamma}(Q(\mu,\hat{w})) = \mathbb{E}_{w\sim N(\mu\hat{w},I)} P_S\left(y\frac{\langle w,x\rangle}{\|w\|\|x\|} \le \gamma\right)$ tends to the
empirical error of the linear classifier $\hat{w}$ with margin $\gamma$, i.e., $P_S\left(y\frac{\langle \hat{w},x\rangle}{\|x\|} \le \gamma\right)$ (recall that $\|\hat{w}\| = 1$).
Also, if $\mu\gamma > 3$, then $\Phi(\mu\gamma) \approx 1$. Taking into consideration that the RHS of (18) scales only in
$O(\ln \mu)$, we can choose a relatively large $\mu$ and (18) gives a dimensionality dependent margin bound
whose multiplicative factor can be very close to 1.
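As a quick numeric illustration (ours, with an arbitrary example margin value): the multiplicative factor $1/\Phi(\mu\gamma)$ implicit in (18) decays rapidly toward 1, so even a moderate normalized margin suffices once $\mu$ is large.

```python
from scipy.stats import norm

gamma = 0.05                             # an example normalized margin
for mu in [10, 30, 60, 100]:
    factor = 1.0 / norm.cdf(mu * gamma)  # multiplicative factor in (18)
    print(mu, round(factor, 4))
# mu*gamma = 0.5, 1.5, 3.0, 5.0 -> factors approx. 1.4462, 1.0716, 1.0014, 1.0000
```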
4 Experiments
In this section we conduct a series of experiments on benchmark datasets. The goal is to see to
what extent the Dimensionality Dependent margin bound (will be referred to as DD-margin bound)
is sharper than the Dimensionality Independent margin bound (will be referred to as DI-margin
bound) as well as the VC bound. More importantly, we want to see from the experiments how
useful the DD-margin bound is for model selection.
Table 1: Description of the datasets
Dataset        # Examples   # Features  |  Dataset     # Examples   # Features
Image            2310          20       |  Letter        20000         16
Magic04         19020          10       |  Mushroom       8124         22
Optdigits        5620          64       |  PageBlock      5473         10
Pendigits       10992          16       |  Waveform       3304         21
BreastCancer      683           9       |  Glass           214          9
Pima              768           8       |  wdbc            569         30
We use 12 datasets all from the UCI repository [24]. A description of the datasets is given in Table
1. For each dataset, we use 5-fold cross validation and average the results over 10 runs (for a total
50 runs). If the dataset is a multiclass problem, we group the data into two classes since we study
binary classification problems. In the data preprocessing stage each feature is normalized to [0, 1].
To compare the bounds and to do model selection, we use SVM with polynomial kernels $K(x, x') =
(a\langle x, x'\rangle + b)^t$ and let $t$ vary.⁴ For each $t$, we train a classifier with libsvm [25]. We plot the
values of the three bounds (the DD-margin bound, the DI-margin bound, and the VC bound (12)) as
well as the test and training error (see Figure 1 - Figure 12). For the two margin bounds, since they
hold uniformly for $\mu > 0$, we select the optimal $\mu$ to make the bounds as small as possible. For
simplicity, we combine Proposition 2.3 with Theorem 3.1 and Theorem 2.2 respectively to obtain
the final bound for the generalization error of the deterministic linear classifiers. In each figure, the
horizontal axis represents the degree $t$ of the polynomial kernel. All bounds in the figures (including
training and test error) are for the deterministic (voting) classifier.
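In outline, the selection procedure can be sketched as follows (our own schematic, reusing kl_inverse_upper from the earlier sketch; the actual experiments train with libsvm and evaluate the exact bounds described above, and the grid of $\mu$ values here is an arbitrary choice):

```python
import numpy as np
from scipy.stats import norm

def dd_margin_bound(margins, n, delta, d, mus=np.linspace(1, 200, 200)):
    """DD-margin bound for the deterministic classifier: Theorem 3.1
    minimized over mu, combined with the factor 2 of Proposition 2.3."""
    best = np.inf
    for mu in mus:
        er_s = np.mean(norm.sf(mu * margins))
        rhs = (0.5 * d * np.log1p(mu**2 / d) + np.log((n + 1) / delta)) / n
        best = min(best, 2 * kl_inverse_upper(er_s, rhs))
    return min(best, 1.0)  # any error bound above 1 is trivial

# Model-selection loop (schematic): for each polynomial degree t, train an
# SVM, compute normalized margins in the induced feature space (whose
# dimension d depends on t), evaluate dd_margin_bound, and select the
# degree t with the smallest bound value.
```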
To analyze the experimental results, we group the 12 results into two categories as follows.
1. Figure 1 - Figure 8. This category consists of eight datasets, and each of them contains
at least 2000 examples (relatively large datasets). On all these datasets, the DD-margin
bounds are significantly sharper than the DI-margin bounds as well as the VC bounds. More
importantly, the DD-margin bounds work well for model selection. We can use this bound
to choose the degree of the polynomial kernel. On all the datasets except ?Image?, the curve
of the DD-margin bound is highly correlated with the curve of the test error: when the test
error decreases (or increases), the DD-margin bound also decreases (or increases); and when
the test error remains unchanged as the degree t grows, the DD-margin bound selects the
model with the lowest complexity.
2. Figure 9 - Figure 12. This category consists of four small datasets, each contains less than
1000 examples. On these small datasets, the VC bounds often become trivial (larger than
1). The DD-margin bounds are still always, but less significantly, sharper than the DI-margin bounds. However, on these small datasets, it is difficult to tell if the bounds select
good models.
In sum, the experimental results demonstrate that the DD-margin bound is usually significantly
sharper than the DI-margin bound as well as the VC bound if the dataset is relatively large. Also the
DD-margin bound is useful for model selection. However, for small datasets, all three bounds seem
not useful for practical purpose.
5 Conclusion
In this paper we study the problem whether dimensionality independency is intrinsic for margin
bounds. We prove a dimensionality dependent PAC-Bayes margin bound. This bound is sharper
than a previously well-known dimensionality independent margin bound when the feature space is of
finite dimension; and they tend to be equivalent as the dimensionality grows to infinity. Experimental
results demonstrate that for relatively large datasets the new bound is often useful for model selection
and significantly sharper than previous margin bound as well as the VC bound.
⁴For simplicity we fix $a$ and $b$ as constants in all the experiments.
[Figures 1-12: each panel plots the DD-margin bound, the DI-margin bound, the VC bound, the training error, and the test error (vertical axis: error, roughly 0 to 1.6) against the polynomial kernel degree t (horizontal axis, t = 2 to 12).]
Figure 1: Image. Figure 2: Letter. Figure 3: Magic04. Figure 4: Mushroom. Figure 5: Optdigits. Figure 6: PageBlocks. Figure 7: Pendigits. Figure 8: Waveform. Figure 9: BreastCancer. Figure 10: Glass. Figure 11: Pima. Figure 12: wdbc.
Our work is based on the PAC-Bayes theory. One limitation is that it involves a multiplicative factor
of 2 when transforming stochastic classifiers to deterministic classifiers. Although we provide two
improved bounds (Proposition 3.4, 3.7) over previous results (Proposition 2.3, 2.4), the multiplicative factor is still strictly larger than 1. Future work is to study whether there exist dimensionality
dependent margin bounds (not necessarily PAC-Bayes) without this multiplicative factor.
Acknowledgments
This work was supported by NSFC (61222307, 61075003) and a grant from Microsoft Research
Asia. We also thank Chicheng Zhang for very helpful discussions.
References
[1] John Shawe-Taylor, Peter L. Bartlett, Robert C. Williamson, and Martin Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926-1940, 1998.
[2] John Langford and John Shawe-Taylor. PAC-Bayes & Margins. In Advances in Neural Information Processing Systems, pages 423-430, 2002.
[3] David A. McAllester. Simplified PAC-Bayesian margin bounds. Learning Theory and Kernel Machines, 2777:203-215, 2003.
[4] John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6:273-306, 2005.
[5] Matthias Seeger. PAC-Bayesian generalization error bounds for Gaussian process classification. Journal of Machine Learning Research, 3:233-269, 2002.
[6] David A. McAllester. Some PAC-Bayesian theorems. Machine Learning, 37(3):355-363, 1999.
[7] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651-1686, 1998.
[8] Vladimir Koltchinskii and Dmitry Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, 30:1-50, 2002.
[9] Vladimir Koltchinskii and Dmitry Panchenko. Complexities of convex combinations and bounding the generalization error in classification. Annals of Statistics, 33:1455-1496, 2005.
[10] John Langford, Matthias Seeger, and Nimrod Megiddo. An improved predictive accuracy bound for averaging classifiers. In International Conference on Machine Learning, pages 290-297, 2001.
[11] Sanjoy Dasgupta and Philip M. Long. Boosting with diverse base classifiers. In Annual Conference on Learning Theory, pages 273-287, 2003.
[12] Leo Breiman. Prediction games and arcing algorithms. Neural Computation, 11:1493-1518, 1999.
[13] Liwei Wang, Masashi Sugiyama, Zhaoxiang Jing, Cheng Yang, Zhi-Hua Zhou, and Jufu Feng. A refined margin analysis for boosting algorithms via equilibrium margin. Journal of Machine Learning Research, 12:1835-1863, 2011.
[14] Olivier Catoni. PAC-Bayesian supervised classification: The thermodynamics of statistical learning. IMS Lecture Notes-Monograph Series, 56, 2007.
[15] Pascal Germain, Alexandre Lacasse, François Laviolette, and Mario Marchand. PAC-Bayesian learning of linear classifiers. In International Conference on Machine Learning, page 45, 2009.
[16] Pascal Germain, Alexandre Lacasse, François Laviolette, Mario Marchand, and Sara Shanian. From PAC-Bayes bounds to KL regularization. In Advances in Neural Information Processing Systems, pages 603-610, 2009.
[17] Jean-Francis Roy, François Laviolette, and Mario Marchand. From PAC-Bayes bounds to quadratic programs for majority votes. In International Conference on Machine Learning, pages 649-656, 2011.
[18] Amiran Ambroladze, Emilio Parrado-Hernández, and John Shawe-Taylor. Tighter PAC-Bayes bounds. In Advances in Neural Information Processing Systems, pages 9-16, 2006.
[19] John Shawe-Taylor, Emilio Parrado-Hernández, and Amiran Ambroladze. Data dependent priors in PAC-Bayes bounds. In International Conference on Computational Statistics, pages 231-240, 2010.
[20] Peter L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2):525-536, 1998.
[21] Ralf Herbrich and Thore Graepel. A PAC-Bayesian margin bound for linear classifiers. IEEE Transactions on Information Theory, 48(12):3140-3150, 2002.
[22] Alexandre Lacasse, François Laviolette, Mario Marchand, Pascal Germain, and Nicolas Usunier. PAC-Bayes bounds for the risk of the majority vote and the variance of the Gibbs classifier. In Advances in Neural Information Processing Systems, pages 769-776, 2006.
[23] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
[24] Andrew Frank and Arthur Asuncion. UCI machine learning repository, 2010.
[25] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1-27:27, 2011.
Simultaneously Leveraging Output and Task
Structures for Multiple-Output Regression
Piyush Rai∗
Dept. of Computer Science
University of Texas at Austin
Austin, TX
[email protected]
Abhishek Kumar∗
Dept. of Computer Science
University of Maryland
College Park, MD
[email protected]
Hal Daumé III
Dept. of Computer Science
University of Maryland
College Park, MD
[email protected]
Abstract
Multiple-output regression models require estimating multiple parameters, one for
each output. Structural regularization is usually employed to improve parameter
estimation in such models. In this paper, we present a multiple-output regression
model that leverages the covariance structure of the latent model parameters as
well as the conditional covariance structure of the observed outputs. This is in
contrast with existing methods that usually take into account only one of these
structures. More importantly, unlike some of the other existing methods, none of
these structures need be known a priori in our model, and are learned from the
data. Several previously proposed structural regularization based multiple-output
regression models turn out to be special cases of our model. Moreover, in addition
to being a rich model for multiple-output regression, our model can also be used in
estimating the graphical model structure of a set of variables (multivariate outputs)
conditioned on another set of variables (inputs). Experimental results on both
synthetic and real datasets demonstrate the effectiveness of our method.
1 Introduction
Multivariate response prediction, also known as multiple-output regression [3] when the responses
are real-valued vectors, is an important problem in machine learning and statistics. The goal in
multiple-output regression is to learn a model for predicting K > 1 real-valued responses (the
output) from D predictors or features (the input), given a training dataset consisting of N input-output pairs. Multiple-output prediction is also an instance of the problem of multitask learning [5,
10] where predicting each output is a task and all the tasks share the same input data. Multipleoutput regression problems are encountered frequently in various application domains. For example,
in computational biology [11], we often want to predict the gene-expression levels of multiple genes
based on a set of single nucleotide polymorphisms (SNPs); in econometrics [17], we often want to
predict the stock prices in the future using relevant macro-economic variables and stock prices in the
past as inputs; in geostatistics, we are often interested in jointly predicting the concentration levels
of different heavy metal pollutants [9]; and so on.
One distinguishing aspect of multiple-output regression is that the outputs are often related to each
other via some underlying (and often a priori unknown) structure. A part of this can be captured by
the imposing a relatedness structure among the regression coefficients (e.g., the weight vectors in a
linear regression model) of all the outputs. We refer to the relatedness structure among the regression
coefficients as task structure. However, there can still be some structure left in the outputs that is not
explained by the regression coefficients alone. This can be due to a limited expressive power of our
chosen hypothesis class (e.g., linear predictors considered in this paper). The residual structure that
is left out when conditioned on inputs will be referred to as output structure here. This can be also be
seen as the covariance structure in the output noise. It is therefore desirable to simultaneously learn
∗ Contributed equally
and leverage both the output structure and the task structure in multiple-output regression models
for improved parameter estimation and prediction accuracy.
Although some of the existing multiple-output regression models have attempted to incorporate such
structures [17, 11, 13], most of these models are restrictive in the sense that (1) they usually exploit
only one of the two structures (output structure or task structure, but not both), and (2) they assume
availability of prior information about such structures which may not always be available. For
example, Multivariate Regression with Covariance Estimation [17] (MRCE) is a recently proposed
method which learns the output structure (in form of the covariance matrix for correlated noise
across multiple outputs) along with the regression coefficients (i.e., the weight vector) for predicting
each output. However MRCE does not explicitly model the relationships among the regression
coefficients of the multiple tasks and therefore fails to account for the task structure. More recently,
[14] proposed an extension of the MRCE model by allowing weighting the individual entries of
the regression coefficients and the entries of the output (inverse) covariance matrix, but otherwise
this model has essentially the same properties as MRCE. Among other works, Graph-guided Fused
Lasso [11] (GFlasso) incorporates task structure to some degree by assuming that the regression
coefficients of all the outputs have similar sparsity patterns. This amounts to assuming that all
the outputs share almost same set of relevant features. However, GFlasso assumes that output graph
structure is known which is rarely true in practice. Some other methods such as[13] take into account
the task structure by imposing structural sparsity on the regression coefficients of the multiple tasks
but again assume that output structure is known a priori and/or is of a specific form. In [22], the
authors proposed a multitask learning model by explicitly modeling the task structures as the task
covariance matrix but this model does not take into account the output structure which is important
in multiple-output regression problems.
In this paper, we present a multiple-output regression model that allows leveraging both output
structure and task structure without assuming an a priori knowledge of either. In our model, both
output structure and task structure are learned from the data, along with the regression coefficients
for each task. Specifically, we model the output structure using the (inverse) covariance matrix of
the correlated noise across the multiple outputs, and the task structure using the (inverse) covariance
matrix of the regression coefficients of the multiple tasks being learned in the model. By explicitly
modeling and learning the output structure and task structure, our model also addresses the limitations of the existing models that typically assume certain specific type of output structures (e.g.,
tree [13]) or task structures (e.g., shared sparsity [11]). In particular, a model with task relatedness
structure based on shared sparsity on the task weight vectors may not be appropriate in many real
applications where all the features are important for prediction and the true task structure is at a
more higher level (e.g., weight vectors for some tasks are closer to each other compared to others).
Apart from providing a flexible way of learning multiple-output regression, our model can also be
used for the problem of conditional inverse covariance estimation of the (multivariate) outputs that
depend on another set of input variables, an important problem that has been gaining significant
attention recently [23, 15, 20, 4, 7, 6].
2 Multiple-Output Regression
In multiple-output regression, each input is associated with a vector of responses and the goal is
to learn the input-output relationship given some training data consisting of input-output pairs.
Formally, given an N × D input matrix X = [x₁, …, x_N]ᵀ and an N × K output matrix Y = [y₁, …, y_N]ᵀ, the goal in multiple-output regression is to learn the functional relationship between the inputs x_n ∈ R^D and the outputs y_n ∈ R^K. For a linear regression model, we write:
y_n = Wᵀ x_n + b + ε_n,  ∀n = 1, …, N    (1)
Here W = [w₁, …, w_K] denotes the D × K matrix where w_k denotes the regression coefficient of the k-th output, b = [b₁, …, b_K]ᵀ ∈ R^K is a vector of bias terms for the K outputs, and ε_n = [ε_{n1}, …, ε_{nK}]ᵀ ∈ R^K is a vector consisting of the noise for each of the K outputs. The noise is typically assumed to be Gaussian with a zero mean and uncorrelated across the K outputs.
Standard parameter estimation for Equation 1 involves maximizing the (penalized) log-likelihood of
the model, or equivalently minimizing the (regularized) loss function over the training data:
arg min_{W,b}  tr((Y − XW − 1bᵀ)(Y − XW − 1bᵀ)ᵀ) + λ R(W)    (2)
where tr(·) denotes the matrix trace, 1 an N × 1 vector of all 1s, and R(W) the regularizer on the weight matrix W consisting of the regression weight vectors of all the outputs. For a choice of R(W) = tr(WᵀW) (the ℓ2-squared norm, equivalent to assuming independent, zero-mean Gaussian priors on the weight vectors), solving Equation 2 amounts to solving K independent regression problems, and this solution ignores any correlations among the outputs or among the weight vectors.
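As a concrete illustration (our sketch, not code from the paper), this decoupled baseline with R(W) = tr(WᵀW) reduces to K independent ridge regressions with a closed-form solution:

```python
import numpy as np

def independent_ridge(X, Y, lam):
    """Solve Eq. (2) with R(W) = tr(W'W): K decoupled ridge regressions.
    X: N x D inputs, Y: N x K outputs; returns W (D x K) and bias b (K,)."""
    Xc = X - X.mean(axis=0)            # centering absorbs the bias term
    Yc = Y - Y.mean(axis=0)
    D = X.shape[1]
    W = np.linalg.solve(Xc.T @ Xc + lam * np.eye(D), Xc.T @ Yc)
    b = Y.mean(axis=0) - W.T @ X.mean(axis=0)
    return W, b
```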
3 Multiple-Output Regression with Output and Task Structures
To take into account both the conditional output covariance and the covariance among the weight vectors W = [w₁, …, w_K], we assume a full covariance matrix Σ of size K × K on the output
noise distribution to capture conditional output covariance, and a structured prior distribution on the
weight vector matrix W that induces structural regularization of W. We place the following prior
distribution on W:
p(W) ∝ ∏_{k=1}^{K} Nor(w_k | 0, I_D) · MN_{D×K}(W | 0_{D×K}, I_D ⊗ Ω)    (3)
where MN_{D×K}(M, A ⊗ B) denotes the matrix-variate normal distribution with M ∈ R^{D×K} being its mean, A ∈ R^{D×D} its row-covariance matrix and B ∈ R^{K×K} its column-covariance matrix. Here ⊗ denotes the Kronecker product. In this prior distribution, the Nor(w_k | 0, I_D) factors regularize the weight vectors w_k individually, and the MN_{D×K}(W | 0_{D×K}, I_D ⊗ Ω) term couples the K weight vectors, allowing them to share statistical strength.
To derive our objective function, we start by writing down the likelihood of the model, for a set of
N i.i.d. observations:
∏_{n=1}^{N} p(y_n | x_n, W, b) = ∏_{n=1}^{N} Nor(y_n | Wᵀ x_n + b, Σ)    (4)
In the above, a diagonal Σ would imply that the K outputs are all conditionally independent of each other. In this paper, we assume a full Σ, which will allow us to capture the conditional output correlations.
Combining the prior on W and the likelihood, we can write down the posterior distribution of W:
p(W | X, Y, b, Σ, Ω) ∝ p(W) ∏_{n=1}^{N} p(y_n | x_n, W, b)
= ∏_{k=1}^{K} Nor(w_k | 0, I_D) · MN_{D×K}(W | 0_{D×K}, I_D ⊗ Ω) · ∏_{n=1}^{N} Nor(y_n | Wᵀ x_n + b, Σ)
Taking the log of the above and simplifying the resulting expression, we can then write the negative
log-posterior of W as (ignoring the constants):
tr((Y − XW − 1bᵀ) Σ⁻¹ (Y − XW − 1bᵀ)ᵀ) + N log |Σ| + tr(W Wᵀ) + tr(W Ω⁻¹ Wᵀ) + D log |Ω|
where 1 denotes an N × 1 vector of all 1s. Note that in the term tr(W Ω⁻¹ Wᵀ), the inverse covariance matrix Ω⁻¹ plays the role of coupling pairs of weight vectors, and therefore controls the amount of sharing between any pair of tasks. The task covariance matrix Ω as well as the conditional output covariance matrix Σ will be learned from the data. For reasons that will become apparent later, we parameterize our model in terms of the inverse covariance matrices Σ⁻¹ and Ω⁻¹ instead of the covariance matrices. With this parameterization, the negative log-posterior becomes:
tr((Y − XW − 1bᵀ) Σ⁻¹ (Y − XW − 1bᵀ)ᵀ) − N log |Σ⁻¹| + tr(W Wᵀ) + tr(W Ω⁻¹ Wᵀ) − D log |Ω⁻¹|    (5)
The objective function in Equation 5 naturally imposes positive-definite constraints on the inverse covariance matrices Σ⁻¹ and Ω⁻¹. In addition, we will impose sparsity constraints (via an ℓ1 penalty) on Σ⁻¹ and Ω⁻¹. Sparsity on these parameters is appealing in this context for two reasons: (1) sparsity leads to improved robust estimates [19, 8] of Σ⁻¹ and Ω⁻¹, and (2) sparsity supports the notion that the output correlations and the task correlations tend to be sparse [21, 4, 8] — not all pairs of outputs are related (given the inputs and other outputs), and likewise not all task pairs (and therefore the corresponding weight vectors) are related. Finally, we will also introduce
regularization hyperparameters to control the trade-off between data-fit and model complexity. Parameter estimation in the model involves minimizing the negative log-posterior which is equivalent
to minimizing the (regularized) loss function. The minimization problem is given as
arg min_{W, b, Σ⁻¹, Ω⁻¹}  tr((Y − XW − 1bᵀ) Σ⁻¹ (Y − XW − 1bᵀ)ᵀ) − N log |Σ⁻¹| + λ tr(W Wᵀ) + λ₁ tr(W Ω⁻¹ Wᵀ) − D log |Ω⁻¹| + λ₂ ||Σ⁻¹||₁ + λ₃ ||Ω⁻¹||₁    (6)
where ||A||₁ denotes the sum of the absolute values of the entries of matrix A. Note that by replacing the regularizer tr(WWᵀ) with a sparsity-inducing regularizer on the individual weight vectors w₁, …, w_K, one can also learn Lasso-like sparsity [19] in the regression weights. In this exposition, however, we consider ℓ2 regularization on the regression weights and let the tr(W Ω⁻¹ Wᵀ) term capture the similarity between the weights of two tasks by learning the task inverse covariance matrix Ω⁻¹. The above cost function is not jointly convex in the variables but is individually convex in each variable when the others are fixed. We adopt an alternating optimization strategy that was empirically observed to converge in all our experiments. More details are provided in the experiments section. Finally, although it is not the main goal of this paper, since our model provides an estimate of the inverse covariance structure Σ⁻¹ of the outputs conditioned on the inputs, it can also be used for the more general problem of estimating the conditional inverse covariance [23, 15, 20, 4, 7] of a set of variables y = {y₁, …, y_K} conditioned on another set of variables x = {x₁, …, x_D}, given paired samples of the form {(x₁, y₁), …, (x_N, y_N)}.
3.1 Special Cases
In this section, we show that our model subsumes/generalizes some previously proposed models for
multiple-output regression. Some of these include:
• Multivariate Regression with Covariance Estimation (MRCE-ℓ2): With the task inverse covariance matrix Ω⁻¹ = I_K and the bias term set to zero, our model results in the ℓ2-regularized-weights variant of the MRCE model [17], which is equivalent to minimizing the following objective:
arg min_{W, Σ⁻¹}  tr((Y − XW) Σ⁻¹ (Y − XW)ᵀ) + λ tr(W Wᵀ) − N log |Σ⁻¹| + λ₂ ||Σ⁻¹||₁
• Multitask Relationship Learning for Regression (MTRL): With the output inverse covariance matrix Σ⁻¹ = I_K and the sparsity constraint on Ω⁻¹ dropped, our model results in the regression version of the multitask relationship learning model proposed in [22]. Specifically, the corresponding objective function would be:
arg min_{W, Ω⁻¹}  tr((Y − XW)(Y − XW)ᵀ) + λ tr(W Wᵀ) + λ₁ tr(W Ω⁻¹ Wᵀ) − D log |Ω⁻¹|
In [22], the −log |Ω⁻¹| term is dropped since the authors solve their cost function in terms of Ω, and this term is concave in Ω. A constraint of tr(Ω) = 1 was introduced in its place to restrict the complexity of the model. We keep the log | · | term in our cost function since we parameterize our model in terms of Ω⁻¹, and −log |Ω⁻¹| is convex in Ω⁻¹.
3.2 Optimization
We take an alternating optimization approach to solve the optimization problem given by Equation 6. Each sub-problem in the alternating optimization steps is convex. The matrices Ω and Σ are initialized to I in the beginning. The bias vector b is initialized to (1/N) Yᵀ1.
Optimization w.r.t. W when Σ⁻¹, Ω⁻¹ and b are fixed:
Given Σ⁻¹, Ω⁻¹, b, the matrix W consisting of the regression weight vectors of all the tasks can be obtained by solving the following optimization problem:
arg min_W  tr((Y − XW − 1bᵀ) Σ⁻¹ (Y − XW − 1bᵀ)ᵀ) + λ tr(W Wᵀ) + λ₁ tr(W Ω⁻¹ Wᵀ)    (7)
The estimate Ŵ is given by solving the following system of linear equations w.r.t. W:
(Σ⁻¹ ⊗ XᵀX + (λ₁ Ω⁻¹ + λ I_K) ⊗ I_D) vec(W) = vec(Xᵀ(Y − 1bᵀ) Σ⁻¹)    (8)
It is easy to see that with Ω and Σ set to identity, the model becomes equivalent to solving K regularized independent linear regression problems.
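For illustration, here is a minimal NumPy sketch of this W-update (our code, not the authors'); it forms the DK × DK Kronecker system of Eq. (8) explicitly, which is only practical for modest D and K:

```python
import numpy as np

def update_W(X, Y, b, Sigma_inv, Omega_inv, lam, lam1):
    """Solve Eq. (8): (Sigma_inv kron X'X + (lam1*Omega_inv + lam*I_K) kron I_D) vec(W)
    = vec(X'(Y - 1 b') Sigma_inv), with vec() stacking the columns of the D x K matrix W."""
    N, D = X.shape
    K = Y.shape[1]
    XtX = X.T @ X
    A = np.kron(Sigma_inv, XtX) + np.kron(lam1 * Omega_inv + lam * np.eye(K), np.eye(D))
    R = X.T @ (Y - np.outer(np.ones(N), b)) @ Sigma_inv     # D x K right-hand side
    w = np.linalg.solve(A, R.reshape(-1, order='F'))        # column-major vec()
    return w.reshape(D, K, order='F')
```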
Optimization w.r.t. b when Σ⁻¹, Ω⁻¹ and W are fixed:
Given Σ⁻¹, Ω⁻¹, W, the bias vector b for all the K outputs can be obtained by solving the following optimization problem:
arg min_b  tr((Y − XW − 1bᵀ) Σ⁻¹ (Y − XW − 1bᵀ)ᵀ)    (9)
The estimate b̂ is given by b̂ = (1/N) (Y − XW)ᵀ 1.
Optimization w.r.t. Ω⁻¹ when Σ⁻¹, W and b are fixed:
Given Σ⁻¹, W, b, the task inverse covariance matrix Ω⁻¹ can be estimated by solving the following optimization problem:
arg min_{Ω⁻¹}  λ₁ tr(W Ω⁻¹ Wᵀ) − D log |Ω⁻¹| + λ₃ ||Ω⁻¹||₁    (10)
It is easy to see that the above is an instance of the standard inverse covariance estimation problem with sample covariance (λ₁/D) WᵀW, and can be solved using standard tools for inverse covariance estimation. We use the graphical Lasso procedure [8] to solve Equation 10 to estimate Ω⁻¹:
Ω̂⁻¹ = gLasso((λ₁/D) WᵀW, λ₃)    (11)
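A sketch of this Ω⁻¹ update using scikit-learn's graphical lasso; the function name update_Omega_inv and the small diagonal jitter are our additions for illustration and numerical stability:

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def update_Omega_inv(W, lam1, lam3):
    """Eq. (11): run the graphical lasso on the pseudo sample covariance (lam1/D) W'W."""
    D = W.shape[0]
    S = (lam1 / D) * (W.T @ W)
    S += 1e-6 * np.eye(S.shape[0])      # jitter for numerical stability (our addition)
    _, Omega_inv = graphical_lasso(S, alpha=lam3)
    return Omega_inv
```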
If we assume Ω⁻¹ to be non-sparse, we can drop the ℓ1 penalty on Ω⁻¹ from Equation 10. However, the solution for Ω⁻¹ will not be defined (when K > D) or will overfit (when K is of the same order as D). To avoid this, we add a regularizer of the form ρ tr(Ω⁻¹) to Equation 10. This can be seen as imposing a matrix-variate Gaussian prior on Ω^{−1/2}, with both row and column covariance matrices equal to I, to make the solution well defined. In the previous case of sparse Ω⁻¹, the solution was well defined because of the sparsity prior on Ω⁻¹. The optimization problem for Ω⁻¹ is then given as
arg min_{Ω⁻¹}  λ₁ tr(W Ω⁻¹ Wᵀ) − D log |Ω⁻¹| + ρ tr(Ω⁻¹).    (12)
Equation 12 admits a closed-form solution, which is given by Ω̂ = (λ₁ WᵀW + ρI)/D. For the non-sparse Ω⁻¹ case, we keep the parameter ρ equal to the hyperparameter λ for the term tr(WWᵀ) in Equation 6.
Optimization w.r.t. Σ⁻¹ when Ω⁻¹, W and b are fixed:
Given Ω⁻¹, W, b, the output inverse covariance matrix Σ⁻¹ can be estimated by solving the following optimization problem:
arg min_{Σ⁻¹}  tr((Y − XW − 1bᵀ) Σ⁻¹ (Y − XW − 1bᵀ)ᵀ) − N log |Σ⁻¹| + λ₂ ||Σ⁻¹||₁    (13)
It is again easy to see that the above problem is an instance of the standard inverse covariance estimation problem with sample covariance (1/N)(Y − XW − 1bᵀ)ᵀ(Y − XW − 1bᵀ), and can be solved using standard tools for inverse covariance estimation. We use the graphical Lasso procedure [8] to solve Equation 13 to estimate Σ⁻¹:
Σ̂⁻¹ = gLasso((1/N)(Y − XW − 1bᵀ)ᵀ(Y − XW − 1bᵀ), λ₂)    (14)
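Putting the four updates together, a minimal sketch of the full alternating scheme (our illustration; update_W and update_Omega_inv are the hypothetical helpers sketched above, and a proper convergence check is omitted for brevity):

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def mrots(X, Y, lam, lam1, lam2, lam3=None, n_iter=15):
    """Alternating optimization for Eq. (6). lam3=None selects the non-sparse
    Omega variant (Eq. 12 with rho = lam); otherwise Omega_inv is updated by
    the graphical lasso (Eq. 11)."""
    N, D = X.shape
    K = Y.shape[1]
    Sigma_inv, Omega_inv = np.eye(K), np.eye(K)
    b = Y.mean(axis=0)
    for _ in range(n_iter):
        W = update_W(X, Y, b, Sigma_inv, Omega_inv, lam, lam1)   # Eq. (8)
        b = (Y - X @ W).mean(axis=0)                             # Eq. (9)
        if lam3 is None:
            Omega = (lam1 * (W.T @ W) + lam * np.eye(K)) / D     # Eq. (12), closed form
            Omega_inv = np.linalg.inv(Omega)
        else:
            Omega_inv = update_Omega_inv(W, lam1, lam3)          # Eq. (11)
        R = Y - X @ W - b                                        # residuals
        S = (R.T @ R) / N + 1e-6 * np.eye(K)
        _, Sigma_inv = graphical_lasso(S, alpha=lam2)            # Eq. (14)
    return W, b, Sigma_inv, Omega_inv
```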
4 Experiments
In this section, we evaluate our model by comparing it with several relevant baselines on both synthetic and real-world datasets. Our main set of results is on multiple-output regression problems,
on which we report mean-squared errors averaged across all the outputs. However, since our model
also provides an estimate of the conditional inverse covariance structure Σ⁻¹ of the outputs, in Section 4.3 we provide experimental results on the structure recovery task as well. We compare our
method with the following baselines:
• Independent regressions (RLS): This baseline learns a regularized least squares (RLS) regression model for each output, without assuming any structure among the weight vectors or among the outputs. This corresponds to our model with Ω = I_K and Σ = I_K. The weight vector of each individual problem is ℓ2-regularized with a hyperparameter λ.
• Curds and Whey (C&W): The predictor in Curds and Whey [3] takes the form W_cw = W_rls U Λ U⁺, where W_rls denotes the regularized least squares predictor, the columns of matrix U are the projection directions for the responses Y obtained from canonical correlation analysis (CCA) of X and Y, and U⁺ denotes the Moore–Penrose pseudoinverse of U. The diagonal matrix Λ contains the shrinkage factors for each CCA projection direction.
• Multi-task Relationship Learning (MTRL): This method leverages task relationships by assuming a matrix-variate prior on the weight matrix W [22]. We chose this baseline because of its flexibility in modeling the task relationships by "discovering" how the weight vectors are related (via Ω⁻¹), rather than assuming a specific structure on them such as shared sparsity [16], a low-rank assumption [2], etc. However, MTRL in the multiple-output regression setting cannot take into account the output structure. It is therefore a special case of our model if we assume the output inverse covariance matrix Σ⁻¹ = I. The MTRL approach proposed in [22] does not have a sparse penalty on Ω⁻¹. We experimented with both sparse and non-sparse variants of MTRL and report the better of the two results here.
• Multivariate Regression with Covariance Estimation (MRCE-ℓ2): This baseline is the ℓ2-regularized variant of the MRCE model [17]. MRCE leverages output structure by assuming a full noise covariance in multiple-output regression and learning it along with the weight matrix W from the data. MRCE, however, cannot take into account the task structure because it cannot capture the relationships among the columns of W. It is therefore a special case of our model if we assume the task inverse covariance matrix Ω⁻¹ = I. We do not compare with the original ℓ1-regularized MRCE [17], to ensure a fair comparison by keeping all the models non-sparse in the weight vectors.
In the experiments, we refer to our model as MROTS (Multiple-output Regression with Output and Task Structures). We experiment with two variants of our proposed model, one without a sparsity-inducing penalty on the task coupling matrix Ω⁻¹ (called MROTS-I), and the other with the sparse penalty on Ω⁻¹ (called MROTS-II). The hyperparameters are selected using four-fold cross-validation. Both MTRL and MRCE-ℓ2 have two hyperparameters each, and these are selected by searching on a two-dimensional grid. For the proposed model with non-sparse Ω⁻¹, we fix the hyperparameter λ in Equations 6 and 12 to 0.001 for all the experiments. This is used to ensure that the task inverse covariance matrix estimate Ω̂⁻¹ exists and is robust when the number of response variables K is of the same order as, or larger than, the input dimension D. The other two parameters λ₁ and λ₂ are selected using cross-validation. For the sparse Ω⁻¹ case, we use the same values of λ₁ and λ₂ that were selected for the non-sparse case, and only the third parameter λ₃ is selected by cross-validation. This procedure avoids a potentially expensive search over a three-dimensional grid. The hyperparameter λ in Equation 6 is again fixed at 0.001.
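A sketch of this selection protocol, assuming the mrots() routine sketched in Section 3.2; the grid values and helper names are ours:

```python
import numpy as np
from itertools import product
from sklearn.model_selection import KFold

def select_hyperparams(X, Y, grid1, grid2, lam=1e-3, n_folds=4):
    """Hypothetical 4-fold grid search over (lam1, lam2) with lam fixed at 0.001,
    mirroring the protocol described above."""
    best, best_mse = None, np.inf
    for lam1, lam2 in product(grid1, grid2):
        errs = []
        for tr, te in KFold(n_folds, shuffle=True, random_state=0).split(X):
            W, b, _, _ = mrots(X[tr], Y[tr], lam, lam1, lam2)
            errs.append(np.mean((Y[te] - X[te] @ W - b) ** 2))
        if np.mean(errs) < best_mse:
            best, best_mse = (lam1, lam2), np.mean(errs)
    return best
```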
4.1 Synthetic data
We describe the process for synthetic data generation here. First, we generate a random positive definite matrix Ω⁻¹ which will act as the task inverse covariance matrix. Next, a matrix V of size D × K is generated with each entry sampled from a normal distribution with zero mean and variance 1/D. We compute the square root S of Ω (Ω = SS, where S is also a symmetric positive definite matrix), and S is used to generate the final weight matrix W as W = VS. It is clear that for a W generated in this fashion, we will have E[WᵀW] = SS = Ω. This process generates W such that its columns (and therefore the weight vectors for different outputs) are correlated. A bias vector b of size K is generated randomly from a zero-mean, unit-variance normal distribution. Then we generate a sparse random positive definite matrix Σ⁻¹ that acts as the conditional inverse covariance matrix of the output noise, making the outputs correlated (given the inputs). Next, input samples are generated i.i.d. from a normal distribution, and the corresponding multivariate output variables are generated as y_i = Wᵀx_i + b + ε_i, ∀i = 1, 2, …, N, where ε_i is a correlated noise vector randomly sampled from a zero-mean normal distribution with covariance matrix Σ.
We generate three sets of synthetic data using the above process to gauge the effectiveness of the proposed model under varying circumstances: (i) D = 20, K = 10 and non-sparse Ω⁻¹, (ii)
Method    | Synth data I | Synth data II | Synth data III | Paper I | Paper II | Gene data
----------|--------------|---------------|----------------|---------|----------|----------
RLS       | 37.29        | 3.22          | 3.94           | 1.08    | 1.04     | 1.92
C&W       | 37.14        | 21.88         | 7.06           | 1.08    | 1.08     | 1.51
MTRL      | 34.45        | 3.12          | 3.86           | 1.07    | 1.03     | 1.24
MRCE-ℓ2   | 29.84        | 3.08          | 3.92           | 1.36    | 1.03     | 1.55
MROTS-I   | 26.65        | 2.61          | 3.75           | 0.90    | 1.03     | 1.18
MROTS-II  | 25.90        | 2.60          | 3.55           | 0.90    | 1.03     | 1.20
Table 1: Prediction error (MSE) on synthetic and real datasets. RLS: independent regression; C&W: Curds and Whey model [3]; MTRL: multi-task relationship learning [22]; MRCE-ℓ2: the ℓ2-regularized version of MRCE [17]; MROTS-I: our model without the sparse penalty on Ω⁻¹; MROTS-II: our model with the sparse penalty on Ω⁻¹. Best results are highlighted in bold.
D = 10, K = 20 and non-sparse Ω⁻¹, and (iii) D = 10, K = 20 and sparse Ω⁻¹. We also experiment with a varying number of training samples (N = 20, 30, 40 and 50).
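The generation process above can be sketched as follows (our illustration; the helper random_spd and the diagonal-loading construction are assumptions, not the authors' exact generator):

```python
import numpy as np

def random_spd(K, rng, sparse=False, density=0.2):
    """Random positive definite matrix; optionally with a sparse off-diagonal pattern."""
    A = rng.standard_normal((K, K))
    if sparse:
        A *= (rng.random((K, K)) < density)
    return A @ A.T + K * np.eye(K)      # diagonal loading guarantees positive definiteness

def make_synthetic(N, D, K, sparse_task_prec=False, seed=0):
    rng = np.random.default_rng(seed)
    Omega = np.linalg.inv(random_spd(K, rng, sparse=sparse_task_prec))  # task covariance
    # symmetric square root S of Omega via eigendecomposition, so that S @ S = Omega
    evals, evecs = np.linalg.eigh(Omega)
    S = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    V = rng.standard_normal((D, K)) / np.sqrt(D)   # entries ~ N(0, 1/D)
    W = V @ S                                      # E[W'W] = S S = Omega
    b = rng.standard_normal(K)
    Sigma = np.linalg.inv(random_spd(K, rng, sparse=True))  # sparse noise precision
    X = rng.standard_normal((N, D))
    noise = rng.multivariate_normal(np.zeros(K), Sigma, size=N)
    Y = X @ W + b + noise                          # y_i = W' x_i + b + eps_i
    return X, Y, W, b
```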
[Figure 1 omitted: panels (a)–(b) plot mean square error vs. number of training samples on Synthetic data I and II for RLS, C&W, MTRL, MRCE-ℓ2, MROTS-I and MROTS-II; panels (c)–(d) plot MSE and objective value vs. iterations on Synthetic data I and Paper data I.]
Figure 1: (a) and (b): mean square error with a varying number of training samples; (c) and (d): mean square error and the value of the objective function with increasing iterations for the proposed method.
4.2 Real data
We also evaluate our model on the following real-world multiple-output regression datasets:
• Paper datasets: These are two multivariate multiple-response regression datasets from the paper industry [1]. The first dataset has 30 samples, with each sample having 9 features and
32 outputs. The second dataset has 29 samples (after ignoring one sample with missing
response variables), each having 9 features and 13 outputs. We take 15 samples for training
and the remaining samples for test.
• Genotype dataset: This dataset has genotypes as input variables and phenotypes or observed traits as output variables [12]. The number of genotypes (features) is 25 and the
number of phenotypes (outputs) is 30. We have a total of 100 samples in this dataset and
we split it equally into training and test data.
The results on synthetic and real-world datasets are shown in Table 1. For synthetic datasets, the
reported results are with 50 training samples. Independent linear regression performs the worst on all
synthetic datasets. MRCE-ℓ2 performs better than MTRL on the first and second synthetic data, while MTRL is better on the third dataset. This mixed behavior of MRCE-ℓ2 and MTRL supports our motivation that both task structure (i.e., relationships among weight vectors) and output structure are important in multiple-output regression. Both MTRL and MRCE-ℓ2 are special cases of our model, with the former ignoring the output structure (captured by Σ⁻¹) and the latter ignoring the weight vector relationships (captured by Ω⁻¹). Both variants of our model (MROTS-I and MROTS-II) perform significantly better than the compared baselines. The improvement with the sparse Ω⁻¹ variant is more prominent on the third dataset, which is generated with sparse Ω⁻¹ (5.33% relative reduction in MSE), than on the first two datasets (2.81% and 0.3% relative reduction in MSE). However, in our experiments, the sparse Ω⁻¹ variant (MROTS-II) always performed better than or as well as the non-sparse variant on all synthetic and real datasets, which suggests that explicitly encouraging zero entries in Ω⁻¹ leads to better estimates of the task relationships (by avoiding spurious correlations between weight vectors). This can potentially improve the prediction performance. Finally, we also note that the Curds & Whey method [3] performs significantly worse than RLS for Synthetic data II and III. C&W uses CCA to project the response matrix Y to a lower min(D, K)-dimensional space, learning min(D, K) predictors there and then projecting them back to the original K-dimensional space. This procedure may end up throwing away relevant information from the responses if K is
much higher than D. These empirical results suggest that C&W may adversely affect the prediction
performance when the number of response variables K is higher than the number of explanatory
variables D (K = 2D in these cases).
On the real-world datasets too, our model performs better than or on par with the compared baselines. Both MROTS-I and MROTS-II perform significantly better than the other baselines on the
first Paper dataset (9 features and 32 outputs per sample). All models perform almost similarly on
the second Paper dataset (9 features and 13 outputs per sample), which could be due to the absence
of a strong task or output structure in this data. C&W does not perform well on either Paper dataset, which might be due to the reason discussed earlier. On the genotype-phenotype prediction task
too, both our models achieve better average mean squared errors than the other baselines, with both
variants performing roughly comparably.
We also evaluate our model's performance with a varying number of training examples and compare with the other baselines. Figures 1(a) and 1(b) show the plots of mean square error vs. the number of training examples for the first two synthetic datasets. We do not plot C&W for Synthetic data II since
it performs worse than RLS. On the first synthetic data, the performance gain of our model is more
pronounced when the number of training examples is small. For the second synthetic data, we retain a similar performance gain over the other models when the number of training examples is increased from
20. The MSE numbers for the first synthetic data are higher than the ones obtained for the second
synthetic data because of a difference in the magnitude of error covariances used in the generation
of datasets.
We also examine the convergence properties of our method. Figures 1(c) and 1(d) show the
plots of average MSE and the value of the objective function (given by Equation 6) with increasing
number of iterations on the first synthetic dataset and the first Paper dataset. The plots show that our
alternating optimization procedure converges in roughly 10–15 iterations.
4.3 Covariance structure recovery
Although not the main goal of the paper, we examine the learned inverse covariance
matrix of the outputs (given the inputs) as a sanity check on the proposed model. To
better visualize, we generate a dataset with 5 responses and 3 predictors using the same
process as described in Sec. 4.1. The figure on the right shows the true conditional inverse covariance matrix Σ⁻¹ (top), the matrix Σ̂⁻¹ learned with MROTS (middle), and the precision matrix learned with the graphical lasso ignoring the predictors (bottom). Taking into account the regression weights results in a better estimate of the true covariance matrix. We got similar results for MRCE-ℓ2, which also takes into account the predictors while learning the inverse covariance, although the MROTS estimates were closer to the ground truth in terms of the Frobenius norm.
5 Related Work
Apart from the prior works discussed in Section 1, our work has connections to some other works
which we discuss in this section. Recently, Sohn & Kim [18] proposed a model for jointly estimating the weight vector for each output and the covariance structure of the outputs. However, they
assume a shared sparsity structure on the weight vectors. This assumption may be restrictive in some
problems. Some other works on conditional graphical model estimation [20, 4] are based on similar
structural sparsity assumptions. In contrast, our model does not assume any specific structure on the
weight vectors, and by explicitly modeling the covariance structure of the weight vectors, learns the
appropriate underlying structure from the data.
6 Future Work and Conclusion
We have presented a flexible model for multiple-output regression taking into account the covariance
structure of the outputs and the covariance structure of the underlying prediction tasks. Our model
does not require a priori knowledge of these structures and learns these from the data. Our model
leads to improved accuracies on multiple-output regression tasks. Our model can be extended in
several ways. For example, one possibility is to model nonlinear input-output relationships by kernelizing the model along the lines of [22].
References
[1] M. Aldrin. Moderate projection pursuit regression for multivariate response data. Computational Statistics and Data Analysis, 21, 1996.
[2] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-task feature learning.
In NIPS, 2007.
[3] L. Breiman and J.H. Friedman. Predicting multivariate responses in multiple linear regression.
Journal of the Royal Statistical Society, Series B (Methodological), pages 3–54, 1997.
[4] T. Cai, H. Li, W. Liu, and J. Xie. Covariate adjusted precision matrix estimation with an
application in genetical genomics. Biometrika, 2011.
[5] Rich Caruana. Multitask Learning. Machine Learning, 28, 1997.
[6] J. Cheng, E. Levina, P. Wang, and J. Zhu.
Sparse ising models with covariates.
arXiv:1209.6342v1, 2012.
[7] S. Ding, G. Wahba, and J. X. Zhu. Learning higher-order graph structure with features by
structure penalty. In NIPS, 2011.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.
[9] P. Goovaerts. Geostatistics For Natural Resources Evaluation. Oxford University Press, 1997.
[10] T. Heskes. Empirical Bayes for learning to learn. ICML, 2000.
[11] S. Kim, K. Sohn, and E. P. Xing. A multivariate regression approach to association analysis of
a quantitative trait network.
[12] S. Kim and E. P. Xing. Statistical estimation of correlated genome associations to a quantitative
trait network. PLoS Genetics, 2009.
[13] S. Kim and E. P. Xing. Tree-guided group lasso for multi-response regression with structured
sparsity, with an application to eQTL mapping. Annals of Applied Statistics, 2012.
[14] W. Lee and Y. Liu. Simultaneous multiple response regression and inverse covariance matrix estimation via penalized gaussian maximum likelihood. Journal of Multivariate Analysis,
2012.
[15] H. Liu, X. Chen, J. Lafferty, and L. Wasserman. Graph-valued regression. In NIPS, 2010.
[16] G. Obozinski, M. J. Wainwright, and M. I. Jordan. Union support recovery in high-dimensional multivariate regression. In NIPS, 2010.
[17] A. J. Rothman, E. Levina, and J. Zhu. Sparse multivariate regression with covariance estimation. Journal of Computational and Graphical Statistics, 2010.
[18] K.A. Sohn and S. Kim. Joint estimation of structured sparsity and output structure in multipleoutput regression via inverse-covariance regularization. In AISTATS, 2012.
[19] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of Royal Statistical
Society, 1996.
[20] J. Yin and H. Li. A sparse conditional gaussian graphical model for analysis of genetical
genomics data. The Annals of Applied Statistics, 2011.
[21] Y. Zhang and J. Schneider. Learning Multiple Tasks with a Sparse Matrix-Normal Penalty. In
NIPS, 2010.
[22] Y. Zhang and D. Yeung. A convex formulation for learning task relationships in multi-task
learning. In UAI, 2010.
[23] S. Zhou, J. Lafferty, and L. Wasserman. Time varying undirected graphs. Machine Learning
Journal, 2010.
High-Order Multi-Task Feature Learning to Identify
Longitudinal Phenotypic Markers for Alzheimer's
Disease Progression Prediction
Hua Wang, Feiping Nie, Heng Huang,
Department of Computer Science and Engineering,
University of Texas at Arlington, Arlington, TX 76019
{huawangcs, feipingnie}@gmail.com, [email protected]
Jingwen Yan, Sungeun Kim, Shannon L. Risacher, Andrew J. Saykin, Li Shen, for the ADNI∗
Department of Radiology and Imaging Sciences,
Indiana University School of Medicine, Indianapolis, IN 46202
{jingyan, sk31, srisache, asaykin, shenli}@iupui.edu
Abstract
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive impairment of memory and other cognitive functions. Regression analysis has been studied to relate neuroimaging measures to cognitive status. However,
whether these measures have further predictive power to infer a trajectory of cognitive performance over time is still an under-explored but important topic in AD
research. We propose a novel high-order multi-task learning model to address this
issue. The proposed model explores the temporal correlations existing in imaging and cognitive data by structured sparsity-inducing norms. The sparsity of the
model enables the selection of a small number of imaging measures while maintaining high prediction accuracy. The empirical studies, using the longitudinal
imaging and cognitive data of the ADNI cohort, have yielded promising results.
1 Introduction
Neuroimaging is a powerful tool for characterizing the neurodegenerative process in the progression of Alzheimer's disease (AD). Neuroimaging measures have been widely studied to predict disease
status and/or cognitive performance [1, 2, 3, 4, 5, 6, 7]. However, whether these measures have
further predictive power to infer a trajectory of cognitive performance over time is still an underexplored yet important topic in AD research. A simple strategy typically used in longitudinal studies
(e.g., [8]) is to analyze a single summarized value such as average change, rate of change, or slope.
This approach may be inadequate to distinguish the complete dynamics of cognitive trajectories
and thus be unable to identify the underlying neurodegenerative mechanisms. Figure 1 shows a
schematic example. Let us look at the plot of Cognitive Score 2. The red and blue groups can be
easily separated by their complete trajectories. However, given very similar score values at the time
points of t0 and t3, any of the aforementioned summarized values may not be sufficient to identify the
group difference. Therefore, if longitudinal cognitive outcomes are available, it would be beneficial
to use the complete information for the identification of relevant imaging markers [9, 10].
∗ Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.ucla.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.ucla.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf.
Figure 1: Longitudinal multi-task regression of cognitive trajectories on MRI measures.
However, how to identify the temporal imaging features that predict longitudinal outcomes is a challenging machine learning problem. First, the input data and response measures often are high-order
tensors, not regular data/label matrices. For example, both the input neuroimaging measures (samples × features × time) and the output cognitive scores (samples × scores × time) are 3D tensors. Thus, it is
not trivial to build the longitudinal learning model for tensor data. Second, the associations between
features and a specific task (e.g. cognitive score) at two consecutive time points are often correlated.
How to efficiently include such correlations of associations across time is unclear. Third, some longitudinal learning tasks are often interrelated to each other. For example, it is well known that [3, 4] in
RAVLT assessment, the total number of words remembered by the participants in the first 5 learning
trials heavily impacts the total number of words which can be recalled in the 6th learning trial, and
the results of these two measures both partially determine the final recognition rate after a 30-minute delay. How to integrate such task correlations into the longitudinal learning model is under-explored.
In this paper, we focus on the problem of predicting longitudinal cognitive trajectories using neuroimaging measures. We propose a novel high-order multi-task feature learning approach to identify longitudinal neuroimaging markers that can accurately predict cognitive scores over all the time
points. The sparsity-inducing norms are introduced to integrate the correlations existing in both
features and tasks. As a result, the selected imaging markers can fully differentiate the entire longitudinal trajectory of relevant scores and better capture the associations between imaging markers
and cognitive changes over time. Because the structured sparsity-inducing norms enforce the correlations along two directions of the learned coefficient tensor, the parameters in different sparsity
norms are tangled together by distinct structures and lead to a difficult optimization problem. We
derive an efficient algorithm to solve the proposed high-order multi-task feature learning objective
with a closed-form solution in each iteration. We further prove the global convergence of our algorithm. We apply the proposed longitudinal multi-task regression method to the ADNI cohort. In
our experiments, the proposed method not only achieves competitive prediction accuracy but also
identifies a small number of imaging markers that are consistent with prior knowledge.
2 High-Order Multi-Task Feature Learning Using Sparsity-Inducing Norms
For AD progression prediction using longitudinal phenotypic markers, the input imaging features
are a set of matrices X = {X₁, X₂, …, X_T} ∈ R^{d×n×T} corresponding to the measurements at T consecutive time points, where X_t is the phenotypic measurement matrix for a certain type of imaging markers, such as the voxel-based morphometry (VBM) markers (see details in Section 3) used in this study, at time t (1 ≤ t ≤ T). Obviously, X is a tensor with d imaging features, n subject samples and T time points. The output cognitive assessments for the same set of subjects are a set of matrices Y = {Y₁, Y₂, …, Y_T} ∈ R^{n×c×T} for a certain type of cognitive measurement, such as the RAVLT memory scores (see details in Section 3), at the same T consecutive time points. Again, Y is a tensor with n samples, c scores, and T time points. Our goal is to learn from {X, Y} a
model that can reveal the longitudinal associations between the imaging and cognitive trajectories,
by which we expect to better understand how the variations of different regions of the human brain affect the AD progression, such that we can improve the diagnosis and treatment of the disease.
Prior regression analyses typically study the associations between imaging features and cognitive
measures at each time point separately, which is equivalent to assuming that the learning tasks, i.e.,
cognitive measures, at different time points are independent. Although this assumption can simplify the problem and make the solution easier to obtain, it overlooks the temporal correlations of
imaging and cognitive measures. To address this, we propose to jointly learn a single longitudinal
regression model for the all time points to identify imaging markers which are associated to cog2
[Figure 2 panels: the coefficient tensor B = {B₁, …, B_T} ∈ R^{d×c×T} (features × tasks × time), its mode-1 unfolding B(1) = unfold₁(B) = [B₁, …, B_T] (d × cT), and its mode-2 unfolding B(2) = unfold₂(B) = [B₁ᵀ, …, B_Tᵀ] (c × dT).]
Figure 2: Left: visualization of the coefficient tensor B learned for the association study on longitudinal data. Middle: the matrix unfolded from B along the first mode (feature dimension). Right:
the matrix unfolded from B along the second mode (task dimension).
patterns. As a result, we aim to learn a coefficient tensor (a stack of coefficient matrices)
B = {B₁, …, B_T} ∈ R^{d×c×T}, as illustrated in the left panel of Figure 2, to reveal the temporal
changes of the coefficient matrices. Given the additional time dimension, our problem becomes a
difficult high-order data analysis problem, which we call high-order multi-task learning.
2.1 Longitudinal Multi-Task Feature Learning
In order to associate the imaging markers and the cognitive measures, the multivariate regression
model was used in traditional association studies, which minimizes the following objective:
min_B J₀ = Σ_{t=1}^{T} ||X_tᵀ B_t − Y_t||_F² + γ ||B||₂² = Σ_{t=1}^{T} ||X_tᵀ B_t − Y_t||_F² + γ Σ_{t=1}^{T} Σ_{k=1}^{d} ||b_t^k||₂².    (1)
where b_t^k denotes the k-th row of the coefficient matrix B_t at time t. Apparently, the objective J₀ in
Eq. (1) can be decoupled for each individual time point. Therefore it does not take into account the
longitudinal correlations between imaging features and cognitive measures. Because our goal in the
association study is to select the imaging markers which are connected to the temporal changes of
all the cognitive measures, the T groups of regression tasks at different time points should not be
decoupled and have to be performed simultaneously. To achieve this, we select imaging markers
correlated to all the cognitive measures at all time points by introducing the sparse regularization
[11, 12, 13] into the longitudinal data regression and feature selection model as follows:
min_B J₁ = Σ_{t=1}^{T} ||X_tᵀ B_t − Y_t||_F² + γ Σ_{k=1}^{d} √(Σ_{t=1}^{T} ||b_t^k||₂²) = Σ_{t=1}^{T} ||X_tᵀ B_t − Y_t||_F² + γ ||B(1)||_{2,1},    (2)
where we denote unfold_k(B) = B(k) ∈ R^{I_k × (I₁…I_{k−1} I_{k+1}…I_n)} as the unfolding operation applied to a general n-mode tensor B along the k-th mode, and B(1) = unfold₁(B) = [B₁, …, B_T], as illustrated in the middle panel of Figure 2. By solving the objective J₁, the imaging features with common influences across all the time points for all the cognitive measures will be selected, due to the second term in Eq. (2), which is a tensor extension of the widely used ℓ_{2,1}-norm for matrices.
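For concreteness, a small NumPy sketch (ours) of the tensor ℓ_{2,1} penalty ||B(1)||_{2,1} in Eq. (2); note that the row norms of the mode-1 unfolding do not depend on the column ordering, so a plain reshape suffices:

```python
import numpy as np

def l21_norm_mode1(B):
    """||B(1)||_{2,1} for a tensor B of shape (d, c, T): unfold along the feature
    mode into a d x (c*T) matrix, then sum the Euclidean norms of its rows."""
    d = B.shape[0]
    B1 = B.reshape(d, -1)   # columns permuted vs. [B_1, ..., B_T], row norms unchanged
    return np.linalg.norm(B1, axis=1).sum()
```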
2.2 High-Order Multi-Task Correlations
The objective J1 in Eq. (2) couples all the learning tasks together, which, though, still does not address the correlations among different learning tasks at different time points. As discussed earlier,
during the AD progression, many cognitive measures are interrelated together and their effects during the process could overlap, thus it is necessary to further develop the objective J1 in Eq. (2) to
leverage the useful information conveyed by the correlations among different cognitive measures.
In order to capture the longitudinal patterns of the AD data, we consider two types of task correlations. First, for an individual cognitive measure, although its association to the imaging features at different stages of the disease could be different, its association patterns at two consecutive time
points tend to be similar [9]. Second, we know that [4, 14] during the AD progression, different
cognitive measures are interrelated to each other. Mathematically speaking, the above two types of
correlations can both be described by the low ranks of the coefficient matrices unfolded from the coefficient tensor along different modes. Thus we further develop our learning model in Eq. (2) to
impose additional low rank regularizations to exploit these task correlations.
Letting B(2) = unfold₂(B) = [B₁ᵀ, …, B_Tᵀ], as illustrated in the right panel of Figure 2, we minimize the ranks of B(1) and B(2) to capture the two types of task correlations, one for each type, as follows:
min_B J₂ = Σ_{t=1}^{T} ||X_tᵀ B_t − Y_t||_F² + γ ||B(1)||_{2,1} + α (||B(1)||_* + ||B(2)||_*),    (3)
where ||·||_* denotes the trace norm of a matrix. Given a matrix M ∈ R^{n×m} with singular values σ_i (1 ≤ i ≤ min(n, m)), the trace norm of M is defined as ||M||_* = Σ_{i=1}^{min(n,m)} σ_i = Tr((M Mᵀ)^{1/2}). It has been shown [15, 16, 17] that the trace norm is the best convex approximation
of the rank-norm. Therefore, the third and fourth terms of J2 in Eq. (3) indeed minimize the rank of
the unfolded learning model B, such that the two types of correlations among the learning tasks at
different time points can be utilized. Due to its capabilities for both imaging marker selection and
task correlation integration on longitudinal data, we call J₂ defined in Eq. (3) the proposed High-Order Multi-Task Feature Learning model, by which we will study the problem of longitudinal data
analysis to predict cognitive trajectories and identify relevant imaging markers.
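As a quick illustration (ours), the trace norm can be evaluated directly from the singular values:

```python
import numpy as np

def trace_norm(M):
    """Trace (nuclear) norm: the sum of singular values, equal to Tr((M M^T)^{1/2})."""
    return np.linalg.svd(M, compute_uv=False).sum()
```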
2.3 New Optimization Algorithm and Its Global Convergence
Despite its nice properties, our new objective J2 in Eq. (3) is a non-smooth convex problem. Some
existing methods can solve it, but not efficiently. Thus, in this subsection we will derive a new
efficient algorithm to solve this optimization problem with global convergence proof, where we
employ an iteratively reweighted method [18] to deal with the non-smooth regularization terms.
Taking the derivative of the objective J₂ in Eq. (3) with respect to B_t and setting it to 0, we obtain¹:
2 X_t X_tᵀ B_t − 2 X_t Y_t + 2γ D B_t + 2α D̃ B_t + 2α B_t D̂ = 0,    (4)
where D is a diagonal matrix with D(i, i) = 1 / (2 √(Σ_{t=1}^{T} ||b_t^i||₂²)), D̃ = (1/2)(B(1) B(1)ᵀ)^{−1/2}, and D̂ = (1/2)(B(2) B(2)ᵀ)^{−1/2}. We can rewrite Eq. (4) as follows:
(X_t X_tᵀ + γ D + α D̃) B_t + α B_t D̂ = X_t Y_t,    (5)
which is a Sylvester equation and can be solved in closed form. When the time t changes from 1 to T, we can calculate B_t (1 ≤ t ≤ T) by solving Eq. (5). Because D, D̃ and D̂ are dependent on B and can be seen as latent variables, we propose an iterative algorithm to obtain the global optimum solutions of B_t (1 ≤ t ≤ T), which is summarized in Algorithm 1.
Convergence analysis of the new algorithm. We first prove the following two useful lemmas, by which we will prove the convergence of Algorithm 1.

Lemma 1. Given a constant \nu > 0, the function f(x) = x - x^2 / (2\nu) satisfies f(x) \le f(\nu) for any x \in R. The equality holds if and only if x = \nu.

The proof of Lemma 1 is obvious and skipped due to the space limit.

Lemma 2. Given two semi-positive definite matrices A and \tilde{A}, the following inequality holds:

    tr( A^{1/2} ) - (1/2) tr( A \tilde{A}^{-1/2} ) \le tr( \tilde{A}^{1/2} ) - (1/2) tr( \tilde{A} \tilde{A}^{-1/2} ) .   (6)

The equality holds if and only if A = \tilde{A}.
^1 \|M\|_{2,1} is a non-smooth function of M and is not differentiable when one of its rows m^i = 0. Following [18], we introduce a small perturbation \zeta > 0 and replace \|M\|_{2,1} by \sum_i \sqrt{ \|m^i\|_2^2 + \zeta }, which is smooth and differentiable with respect to M. Apparently, \sum_i \sqrt{ \|m^i\|_2^2 + \zeta } reduces to \|M\|_{2,1} when \zeta -> 0. In the sequel of this paper, we implicitly apply this replacement for all \|.\|_{2,1}. Following the same idea, we also introduce a small perturbation \zeta > 0 and replace \|M\|_* by tr( (M M^T + \zeta I)^{1/2} ) for the same reason.
Algorithm 1: A new algorithm to solve the optimization problem in Eq. (3).
Data: X = [X_1, X_2, ..., X_T] \in R^{d x n x T}, Y = [Y_1, Y_2, ..., Y_T] \in R^{n x c x T}.
1. Set g = 1. Initialize B_t^{(1)} \in R^{d x c} (1 \le t \le T) using the linear regression results at each individual time point.
repeat
  2. Calculate the diagonal matrix D^{(g)}, whose i-th diagonal element is D^{(g)}(i,i) = 1 / ( 2 \sqrt{ \sum_{t=1}^T \|b_t^{(g),i}\|_2^2 } ); calculate \hat{D}^{(g)} = (1/2) ( B_{(1)}^{(g)} (B_{(1)}^{(g)})^T )^{-1/2}; calculate \tilde{D}^{(g)} = (1/2) ( B_{(2)}^{(g)} (B_{(2)}^{(g)})^T )^{-1/2}.
  3. Update B_t^{(g+1)} (1 \le t \le T) by solving the Sylvester equation in Eq. (5).
  4. g = g + 1.
until convergence
Result: B = [B_1, B_2, ..., B_T] \in R^{d x c x T}.
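The following is a minimal NumPy/SciPy sketch of how Algorithm 1 could be implemented; it is our own illustration under stated assumptions (scipy.linalg.solve_sylvester for Step 3; the names high_order_mtfl and inv_sqrt are ours, not from any released code):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def inv_sqrt(M, eps=1e-8):
    # (M + eps*I)^(-1/2) via eigendecomposition; eps mirrors the small
    # perturbation zeta used in the smoothing footnote above
    w, V = np.linalg.eigh(M + eps * np.eye(M.shape[0]))
    return (V / np.sqrt(w)) @ V.T

def high_order_mtfl(Xs, Ys, gamma, lam, n_iter=50, eps=1e-8):
    """Iteratively reweighted sketch of Algorithm 1 for Eq. (3).
    Xs: length-T list of (d, n) arrays; Ys: length-T list of (n, c) arrays."""
    # Step 1: initialize with per-time-point least squares
    Bs = [np.linalg.lstsq(X.T, Y, rcond=None)[0] for X, Y in zip(Xs, Ys)]
    for _ in range(n_iter):
        # Step 2: reweighting matrices D, D_hat, D_tilde from the current B
        row_norms = np.sqrt(sum(B**2 for B in Bs).sum(axis=1) + eps)
        D = np.diag(1.0 / (2.0 * row_norms))            # d x d
        B1 = np.hstack(Bs)                              # unfold along the feature mode
        B2 = np.hstack([B.T for B in Bs])               # unfold along the task mode
        D_hat = 0.5 * inv_sqrt(B1 @ B1.T, eps)          # d x d
        D_til = 0.5 * inv_sqrt(B2 @ B2.T, eps)          # c x c
        # Step 3: closed-form update via the Sylvester equation (5)
        for t, (X, Y) in enumerate(zip(Xs, Ys)):
            A = X @ X.T + gamma * D + lam * D_hat
            Bs[t] = solve_sylvester(A, lam * D_til, X @ Y)
    return Bs
```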
Proof: Because A and \tilde{A} are two semi-positive definite matrices, and tr( A \tilde{A}^{-1/2} ) = tr( \tilde{A}^{-1/4} A \tilde{A}^{-1/4} ) by the cyclic property of the trace, we can derive:

    tr( A \tilde{A}^{-1/2} - 2 A^{1/2} + \tilde{A}^{1/2} ) = tr( ( \tilde{A}^{-1/4} A^{1/2} - \tilde{A}^{1/4} ) ( A^{1/2} \tilde{A}^{-1/4} - \tilde{A}^{1/4} ) ) = \| \tilde{A}^{-1/4} A^{1/2} - \tilde{A}^{1/4} \|_F^2 \ge 0 ,   (7)

by which we have the inequality tr( A^{1/2} ) - (1/2) tr( A \tilde{A}^{-1/2} ) \le (1/2) tr( \tilde{A}^{1/2} ), which is equivalent to Eq. (6) and completes the proof of Lemma 2.
Now we prove the convergence of Algorithm 1, which is summarized by the following theorem.
Theorem 1. Algorithm 1 monotonically decreases the objective of the problem in Eq. (3) in each iteration, and converges to the globally optimal solution.
Proof: In Algorithm 1, denote the updated B_t in each iteration by \tilde{B}_t, the updated B_{(1)} by \tilde{B}_{(1)}, and the updated B_{(2)} by \tilde{B}_{(2)}. Also denote the least-squares loss in the g-th iteration by L^{(g)} = \sum_{t=1}^T \|X_t^T B_t^{(g)} - Y_t\|_F^2. According to Step 3 of Algorithm 1, we know that the following inequality holds:

    L^{(g+1)} + \gamma \sum_{t=1}^T tr( \tilde{B}_t^T D \tilde{B}_t ) + \lambda \sum_{t=1}^T tr( \tilde{B}_t^T \hat{D} \tilde{B}_t ) + \lambda \sum_{t=1}^T tr( \tilde{B}_t \tilde{D} \tilde{B}_t^T )
    \le L^{(g)} + \gamma \sum_{t=1}^T tr( B_t^T D B_t ) + \lambda \sum_{t=1}^T tr( B_t^T \hat{D} B_t ) + \lambda \sum_{t=1}^T tr( B_t \tilde{D} B_t^T ) .   (8)

From Eq. (8) we can derive:

    L^{(g+1)} + \gamma tr( \tilde{B}_{(1)}^T D \tilde{B}_{(1)} ) + \lambda tr( \tilde{B}_{(1)} \tilde{B}_{(1)}^T \hat{D} ) + \lambda tr( \tilde{B}_{(2)} \tilde{B}_{(2)}^T \tilde{D} )
    \le L^{(g)} + \gamma tr( B_{(1)}^T D B_{(1)} ) + \lambda tr( B_{(1)} B_{(1)}^T \hat{D} ) + \lambda tr( B_{(2)} B_{(2)}^T \tilde{D} ) .   (9)
According to the definitions of D, \hat{D} and \tilde{D}, we have:

    L^{(g+1)} + (\gamma/2) \sum_{k=1}^d [ \sum_{t=1}^T \|b_t^{(g+1),k}\|_2^2 / \sqrt{ \sum_{t=1}^T \|b_t^{(g),k}\|_2^2 } ] + (\lambda/2) tr( \tilde{B}_{(1)} \tilde{B}_{(1)}^T ( B_{(1)} B_{(1)}^T )^{-1/2} ) + (\lambda/2) tr( \tilde{B}_{(2)} \tilde{B}_{(2)}^T ( B_{(2)} B_{(2)}^T )^{-1/2} )
    \le L^{(g)} + (\gamma/2) \sum_{k=1}^d [ \sum_{t=1}^T \|b_t^{(g),k}\|_2^2 / \sqrt{ \sum_{t=1}^T \|b_t^{(g),k}\|_2^2 } ] + (\lambda/2) tr( B_{(1)} B_{(1)}^T ( B_{(1)} B_{(1)}^T )^{-1/2} ) + (\lambda/2) tr( B_{(2)} B_{(2)}^T ( B_{(2)} B_{(2)}^T )^{-1/2} ) .   (10)
Then, according to Lemma 1 and Lemma 2, the following three inequalities hold:

    \sum_{k=1}^d \sqrt{ \sum_{t=1}^T \|b_t^{(g+1),k}\|_2^2 } - \sum_{k=1}^d [ \sum_{t=1}^T \|b_t^{(g+1),k}\|_2^2 / ( 2 \sqrt{ \sum_{t=1}^T \|b_t^{(g),k}\|_2^2 } ) ]
    \le \sum_{k=1}^d \sqrt{ \sum_{t=1}^T \|b_t^{(g),k}\|_2^2 } - \sum_{k=1}^d [ \sum_{t=1}^T \|b_t^{(g),k}\|_2^2 / ( 2 \sqrt{ \sum_{t=1}^T \|b_t^{(g),k}\|_2^2 } ) ] ,   (11)

    tr( ( \tilde{B}_{(1)} \tilde{B}_{(1)}^T )^{1/2} ) - (1/2) tr( \tilde{B}_{(1)} \tilde{B}_{(1)}^T ( B_{(1)} B_{(1)}^T )^{-1/2} )
    \le tr( ( B_{(1)} B_{(1)}^T )^{1/2} ) - (1/2) tr( B_{(1)} B_{(1)}^T ( B_{(1)} B_{(1)}^T )^{-1/2} ) ,   (12)

    tr( ( \tilde{B}_{(2)} \tilde{B}_{(2)}^T )^{1/2} ) - (1/2) tr( \tilde{B}_{(2)} \tilde{B}_{(2)}^T ( B_{(2)} B_{(2)}^T )^{-1/2} )
    \le tr( ( B_{(2)} B_{(2)}^T )^{1/2} ) - (1/2) tr( B_{(2)} B_{(2)}^T ( B_{(2)} B_{(2)}^T )^{-1/2} ) .   (13)
Adding both sides of Eqs. (10)-(13) together, we obtain:

    L^{(g+1)} + \gamma \sum_{k=1}^d \sqrt{ \sum_{t=1}^T \|b_t^{(g+1),k}\|_2^2 } + \lambda tr( ( \tilde{B}_{(1)} \tilde{B}_{(1)}^T )^{1/2} ) + \lambda tr( ( \tilde{B}_{(2)} \tilde{B}_{(2)}^T )^{1/2} )
    \le L^{(g)} + \gamma \sum_{k=1}^d \sqrt{ \sum_{t=1}^T \|b_t^{(g),k}\|_2^2 } + \lambda tr( ( B_{(1)} B_{(1)}^T )^{1/2} ) + \lambda tr( ( B_{(2)} B_{(2)}^T )^{1/2} ) .   (14)

Thus, our algorithm decreases the objective value of Eq. (3) in each iteration. When the objective value no longer changes, Eq. (4) is satisfied, i.e., the K.K.T. condition of the objective is satisfied. Thus, our algorithm reaches one of the optimal solutions. Because the objective in Eq. (3) is a convex problem, Algorithm 1 converges to a globally optimal solution.
3 Experiments
We evaluate the proposed method by applying it to the Alzheimer's Disease Neuroimaging Initiative
(ADNI) cohort to examine the association between a wide range of imaging measures and two types
of cognitive measures over a certain period of time. Our goal is to discover a compact set of imaging
markers that are closely related to cognitive trajectories.
Imaging markers and cognitive measures. Data used in this work were obtained from the ADNI
database (adni.loni.ucla.edu). One goal of ADNI has been to test whether serial MRI, PET,
other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of Mild Cognitive Impairment (MCI) and early AD. For up-to-date information,
see www.adni-info.org. We downloaded 1.5 T MRI scans and demographic information for
821 ADNI-1 participants. We performed voxel-based morphometry (VBM) on the MRI data by
following [8], and extracted mean modulated gray matter (GM) measures for 90 target regions of
interest (ROIs) (see Figure 3 for the ROI list and detailed definitions of these ROIs in [3]). These
measures were adjusted for the baseline intracranial volume (ICV) using the regression weights derived from the healthy control (HC) participants at the baseline. We also downloaded the longitudinal
scores of the participants in two independent cognitive assessments including the Fluency Test and Rey's
Auditory Verbal Learning Test (RAVLT). The details of these cognitive assessments can be found
in the ADNI procedure manuals2 . The time points examined in this study for both imaging markers
and cognitive assessments included baseline (BL), Month 6 (M6), Month 12 (M12) and Month 24
(M24). All the participants with no missing BL/M6/M12/M24 MRI measurements and cognitive
measures were included in this study. A total of 417 subjects were involved in our study, including 84 AD, 191 MCI and 142 HC participants. We examined 3 RAVLT scores (RAVLT TOTAL, RAVLT TOT6 and RAVLT RECOG) and 2 Fluency scores (FLU ANIM and FLU VEG).
3.1 Improved Cognitive Score Prediction from Longitudinal Imaging Markers
We first evaluate the proposed method by applying it to the ADNI cohort for predicting the two types
of cognitive scores using the VBM markers, tracked over four different time points. Our goal in this
experiment is to improve the prediction performance.
Experimental setting. We compare the proposed method against its two close counterparts including multivariate linear regression (LR) and ridge regression (RR). LR is the simplest and widely
used regression model in statistical learning and brain image analysis. RR is a regularized version
of LR to avoid over-fitting. Due to their mathematical nature, these two methods are performed for
2 http://www.adni-info.org/Scientists/ProceduresManuals.aspx
Table 1: Performance comparison for memory score prediction measured by RMSE.

Method                    RAVLT   Fluency
LR                        0.380   0.171
RR                        0.341   0.165
TGL                       0.318   0.155
Ours (l2,1-norm only)     0.306   0.144
Ours (trace norm only)    0.301   0.147
Ours                      0.283   0.135
each cognitive measure at each time point separately, and thus they cannot make use of the temporal correlation. We also compare our method to a recent longitudinal method, Temporal Group Lasso Multi-Task Regression (TGL) [9]. TGL takes into account the longitudinal property of the data, but is designed to analyze only one single memory score at a time. In contrast, besides imposing structured sparsity via the tensor \ell_{2,1}-norm regularization for imaging marker selection, our new method also imposes two trace norm regularizations to capture the interrelationships among different cognitive measures over the temporal dimension. Thus, the proposed method is able to perform an association study for all the relevant scores of a cognitive test at the same time; e.g., our method can simultaneously deal with the three RAVLT scores, or the two Fluency scores.
To evaluate the usefulness of each component of the proposed method, we implement three versions of our method as follows. First, we only impose the \ell_{2,1}-norm regularization on the coefficient tensor B unfolded along the feature mode, denoted as "\ell_{2,1}-norm only". Second, we only impose the trace norm regularizations on the two coefficient matrices unfolded from the coefficient tensor B along the feature and task modes respectively, denoted as "trace norm only". Finally, we implement the full version of our new method that solves the proposed objective in Eq. (3). Note that, if no regularization is imposed, our method degenerates to the traditional LR method.
To measure prediction performance, we use a standard 5-fold cross-validation strategy, computing the root mean square error (RMSE) between the predicted and actual values of the cognitive scores on the testing data only. Specifically, the whole set of subjects is equally and randomly partitioned into five subsets; each time, the subjects within one subset are selected as the testing samples and all other subjects in the remaining four subsets are used for training the regression models. This process is repeated five times and average results are reported in Table 1. To treat all regression tasks equally, data for each response variable is normalized to have zero mean and unit variance.
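A minimal sketch of this protocol, assuming scikit-learn's KFold and a generic fit routine such as the solver sketched earlier (the name cv_rmse is ours; normalization is omitted for brevity):

```python
import numpy as np
from sklearn.model_selection import KFold

def cv_rmse(Xs, Ys, fit, n_splits=5, seed=0):
    """5-fold CV RMSE over subjects.
    fit(Xs_tr, Ys_tr) -> list of per-time-point coefficient matrices B_t."""
    n = Xs[0].shape[1]
    errs = []
    for tr, te in KFold(n_splits, shuffle=True, random_state=seed).split(np.arange(n)):
        Bs = fit([X[:, tr] for X in Xs], [Y[tr] for Y in Ys])
        preds = [X[:, te].T @ B for X, B in zip(Xs, Bs)]
        sq = [(p - Y[te])**2 for p, Y in zip(preds, Ys)]
        errs.append(np.sqrt(np.mean(np.concatenate([s.ravel() for s in sq]))))
    return float(np.mean(errs))
```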
Experimental results. From Table 1 we can see that the proposed method is consistently better than
the three competing methods, which can be attributed to the following reasons. First, because LR
and RR methods by nature can only deal with one individual cognitive measure at one single time
point at a time, they cannot benefit from the correlations across different cognitive measures over the
entire time course. Second, although the TGL method improves on the previous two methods in that it does take into account longitudinal data patterns, it still assumes all the test scores (i.e., learning tasks) from one cognitive assessment to be independent, which is not true in reality. For example, it is well known [3, 4] that in the RAVLT assessment the total number of words remembered by the participants in the first 5 learning trials (RAVLT TOTAL) heavily impacts the total number of words that can be recalled in the 6th learning trial (RAVLT TOT6), and the results of these two measures both partially determine the final recognition rate after a 30-minute delay (RAVLT RECOG). In contrast, our new method considers all c learning tasks (c = 3 for the RAVLT assessment and c = 2 for the Fluency assessment) as an integral learning object, as formulated in Eq. (3), such that their
correlations can be incorporated by the two imposed low-rank regularization terms.
Besides, we also observe that the two degenerated versions of the proposed method do not perform as well as their full-version counterpart, which provides concrete evidence for the necessity of the component terms of our learning objective in Eq. (3) and justifies our motivation to impose \ell_{2,1}-norm regularization for feature selection and trace norm regularization to capture task correlations.
3.2 Identification of Longitudinal Imaging Markers
One of the primary goals of our regression analysis is to identify a subset of imaging markers which are highly correlated to the AD progression reflected by the cognitive changes over time. Therefore, we examine the imaging markers identified by the proposed method with respect to the longitudinal changes encoded by the cognitive scores recorded at the four consecutive time points.
[Figure 3, top panel: heat map of the average regression weight magnitudes (color scale roughly 0.001-0.006) of the 90 VBM ROI measures, from LAmygdala through RThalamus, at the four time points BL, M6, M12 and M24.]
Figure 3: Top panel: Average regression weights of imaging markers for predicting three RAVLT
memory scores. Bottom panel: Top 10 average weights mapped onto the brain.
Shown in Figure 3 are (1) the heat map of the learned weights (magnitudes of the average regression
weights for all three RAVLT scores at each time point) of the VBM measures at different time points
calculated by our method; and (2) the top 10 weights mapped onto the brain anatomy. A first glance
at the heat map in Figure 3 indicates that the selected imaging markers have clear patterns that span
across all the four studied time points, which demonstrates that these markers are longitudinally
stable and thereby can potentially serve as screening targets over the course of AD progression.
Moreover, we observe that the bilateral hippocampi and parahippocampal gyri are among the top
selected features. These findings are in accordance with the knowledge that in the pathological pathway of AD the medial temporal lobe is affected first, followed by progressive neocortical damage [19, 20]. Evidence of significant atrophy of the middle temporal region in AD patients has also been observed in previous studies [21, 22, 23].
In summary, the identified longitudinally stable imaging markers are highly suggestive and strongly agree with existing research findings, which supports the correctness of the discovered imaging-cognition associations in revealing the complex relationships between MRI measures and cognitive scores. This is important for both theoretical research and clinical practice toward a better understanding of the AD mechanism.
4 Conclusion
To reveal the relationship between longitudinal cognitive measures and neuroimaging markers, we
have proposed a novel high-order multi-task feature learning model, which selects the longitudinal
imaging markers that can accurately predict cognitive measures at all the time points. As a result,
these imaging markers could fully differentiate the entire longitudinal trajectory of relevant cognitive
measures and better capture the associations between imaging markers and cognitive changes over
time. To solve our new objective, which uses the non-smooth structured sparsity-inducing norms,
we have derived an iterative algorithm with a closed form solution in each iteration. We have further
proved our algorithm converges to the global optimal solution. The validations using ADNI imaging
and cognitive data have demonstrated the promise of our method.
Acknowledgement. This work was supported by NSF CCF-0830780, CCF-0917274, DMS-0915228, and IIS-1117965 at UTA; and by NSF IIS-1117335, NIH R01 LM011360, UL1 RR025761, U01 AG024904, RC2 AG036535, R01 AG19771, and P30 AG10133-18S1 at IU. Data used in the work were obtained from the ADNI database. ADNI funding information is available at http://adni.loni.ucla.edu/wp-content/uploads/how_to_apply/ADNI_DSP_Policy.pdf.
References
[1] C Hinrichs, V Singh, G Xu, SC Johnson, and ADNI. Predictive markers for ad in a multi-modality
framework: an analysis of mci progression in the adni population. Neuroimage, 55(2):574?89, 2011.
[2] CM Stonnington, C Chu, S Kloppel, and et al. Predicting clinical scores from magnetic resonance scans
in alzheimer?s disease. Neuroimage, 51(4):1405?13, 2010.
[3] L. Shen, S. Kim, and et al. Whole genome association study of brain-wide imaging phenotypes for
identifying quantitative trait loci in MCI and AD: A study of the ADNI cohort. Neuroimage, 2010.
[4] H. Wang, F. Nie, H. Huang, S. Risacher, C. Ding, A.J. Saykin, L. Shen, et al. Sparse multi-task regression
and feature selection to identify brain imaging predictors for memory performance. In ICCV, 2011.
[5] D. Zhang and D. Shen. Multi-modal multi-task learning for joint prediction of multiple regression and
classification variables in alzheimer?s disease. Neuroimage, 2011.
[6] H. Wang, F. Nie, H. Huang, S. Kim, Nho K., S. Risacher, A. Saykin, and L. Shen. Identifying Quantitative
Trait Loci via Group-Sparse Multi-Task Regression and Feature Selection: An Imaging Genetics Study
of the ADNI Cohort. Bioinformatics, 28(2):229?237, 2012.
[7] H. Wang, F. Nie, H. Huang, S. Risacher, A. Saykin, and L. Shen. Identifying Disease Sensitive and
Quantitative Trait Relevant Biomarkers from Multi-Dimensional Heterogeneous Imaging Genetics Data
via Sparse Multi-Modal Multi-Task Learning. Bioinformatics, 28(18):i127?i136, 2012.
[8] S. L. Risacher, L. Shen, J. D. West, S. Kim, B. C. McDonald, L. A. Beckett, D. J. Harvey, Jr. Jack, C. R.,
M. W. Weiner, A. J. Saykin, and ADNI. Longitudinal MRI atrophy biomarkers: relationship to conversion
in the ADNI cohort. Neurobiol Aging, 31(8):1401?18, 2010.
[9] J. Zhou, L. Yuan, J. Liu, and J. Ye. A multi-task learning formulation for predicting disease progression.
In SIGKDD, 2011.
[10] H. Wang, F. Nie, H. Huang, J. Yan, S. Kim, Nho K., S. Risacher, A. Saykin, and L. Shen. From Phenotype
to Genotype: An Association Study of Candidate Phenotypic Markers to Alzheimer?s Disease Relevant
SNPs. Bioinformatics, 28(12):i619?i625, 2012.
[11] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. NIPS, pages 41?48, 2007.
[12] G. Obozinski, B. Taskar, and M. Jordan. Multi-task feature selection. Technical report, Department of
Statistics, University of California, Berkeley, 2006.
[13] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of The
Royal Statistical Society Series B, 68(1):49C?67, 2006.
[14] H. Wang, F. Nie, H. Huang, S. Risacher, A. Saykin, and L. Shen. Identifying ad-sensitive and cognitionrelevant imaging biomarkers via joint classification and regression. Medical Image Computing and
Computer-Assisted Intervention (MICCAI 2011), pages 115?123, 2011.
[15] B. Recht, M. Fazel, and P.A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via
nuclear norm minimization. Arxiv preprint arxiv:0706.4138, 2007.
[16] E.J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[17] E.J. Candes and T. Tao. The power of convex relaxation: Near-optimal matrix completion. Information
Theory, IEEE Transactions on, 56(5):2053?2080, 2010.
[18] I.F. Gorodnitsky and B.D. Rao. Sparse signal reconstruction from limited data using focuss: A reweighted minimum norm algorithm. Signal Processing, IEEE Transactions on, 45(3):600?616, 1997.
[19] H. Braak and E. Braak. Neuropathological stageing of alzheimer-related changes. Acta neuropathologica,
82(4):239?259, 1991.
[20] A. Delacourte, JP David, N. Sergeant, L. Buee, A. Wattez, P. Vermersch, F. Ghozali, C. Fallet-Bianco,
F. Pasquier, F. Lebert, et al. The biochemical pathway of neurofibrillary degeneration in aging and
alzheimers disease. Neurology, 52(6):1158?1158, 1999.
[21] L.G. Apostolova, P.H. Lu, S. Rogers, R.A. Dutton, K.M. Hayashi, A.W. Toga, J.L. Cummings, and
P.M. Thompson. 3d mapping of mini-mental state examination performance in clinical and preclinical
alzheimer disease. Alzheimer Disease & Associated Disorders, 20(4):224, 2006.
[22] A. Convit, J. De Asis, MJ De Leon, CY Tarshish, S. De Santi, and H. Rusinek. Atrophy of the medial occipitotemporal, inferior, and middle temporal gyri in non-demented elderly predict decline to Alzheimer?s
disease. Neurobiol of aging, 21(1):19?26, 2000.
[23] V. Julkunen, E. Niskanen, S. Muehlboeck, M. Pihlajamäki, M. Könönen, M. Hallikainen, M. Kivipelto, S. Tervo, R. Vanninen, A. Evans, et al. Cortical thickness analysis to detect progressive mild cognitive impairment: a reference to Alzheimer's disease. Dementia and Geriatric Cognitive Disorders, 28(5):404-412, 2009.
The Lovász ϑ function, SVMs and finding large dense subgraphs
Vinay Jethava*
Computer Science & Engineering Department,
Chalmers University of Technology
412 96, Goteborg, SWEDEN
[email protected]
Chiranjib Bhattacharyya
Department of CSA,
Indian Institute of Science
Bangalore, 560012, INDIA
[email protected]
Anders Martinsson
Department of Mathematics,
Chalmers University of Technology
412 96, Goteborg, SWEDEN
[email protected]
Devdatt Dubhashi
Computer Science & Engineering Department,
Chalmers University of Technology
412 96, Goteborg, SWEDEN
[email protected]
Abstract
The Lovász ϑ function of a graph, a fundamental tool in combinatorial optimization and approximation algorithms, is computed by solving a SDP. In this paper we establish that the Lovász ϑ function is equivalent to a kernel learning problem related to one class SVM. This interesting connection opens up many opportunities bridging graph theoretic algorithms and machine learning. We show that there exist graphs, which we call SVM-ϑ graphs, on which the Lovász ϑ function can be approximated well by a one-class SVM. This leads to a novel use of SVM techniques for solving algorithmic problems in large graphs, e.g. identifying a planted clique of size Θ(√n) in a random graph G(n, 1/2). A classic approach for this problem involves computing the ϑ function; however, it is not scalable due to the SDP computation. We show that the random graph with a planted clique is an example of a SVM-ϑ graph. As a consequence, a SVM based approach easily identifies the clique in large graphs and is competitive with the state-of-the-art. We introduce the notion of a common orthogonal labelling and show that it can be computed by solving a Multiple Kernel learning problem. It is further shown that such a labelling is extremely useful in identifying a large common dense subgraph in multiple graphs, which is known to be a computationally difficult problem. The proposed algorithm achieves an order of magnitude scalability compared to state of the art methods.
1
Introduction
The Lovász ϑ function [19] plays a fundamental role in modern combinatorial optimization and in various approximation algorithms on graphs; indeed, Goemans was led to say "it seems all roads lead to ϑ" [10]. The function is an instance of semidefinite programming (SDP), and hence computing it is an extremely demanding task even for moderately sized graphs. In this paper we establish that the ϑ function is equivalent to solving a kernel learning problem in the one-class SVM setting. This surprising connection opens up many opportunities which can benefit both graph theory and machine learning. In this paper we exploit this novel connection to show an interesting application of the SVM setup for identifying large dense subgraphs. More specifically, we make the following contributions.
* Relevant code and datasets can be found on http://www.cse.chalmers.se/~jethava/svm-theta.html
1.1
Contributions:
1. We give a new SDP characterization of the Lovász ϑ function:

    min_{K ∈ K(G)} ω(K) = ϑ(G) ,

where ω(K) is computed by solving a one-class SVM. The matrix K is a kernel matrix associated with any orthogonal labelling of G. This is discussed in Section 2.
2. Using an easy-to-compute orthogonal labelling, we show that there exist graphs, which we call SVM-ϑ graphs, on which the Lovász ϑ function can be well approximated by solving a one-class SVM. This is discussed in Section 3.
3. The problem of finding a large common dense subgraph in multiple graphs arises in a variety of domains including biology, the Internet, and the social sciences [18]. Existing state-of-the-art methods [14] are enumerative in nature and have complexity exponential in the size of the subgraph. We introduce the notion of a common orthogonal labelling, which can be used to develop a formulation close in spirit to a Multiple Kernel Learning based formulation. Our results on the well known DIMACS benchmark dataset show that it can identify large common dense subgraphs in a wide variety of settings, beyond the reach of state-of-the-art methods. This is discussed in Section 4.
4. Lastly, in Section 5, we show that the famous planted clique problem can be easily solved for large graphs by solving a one-class SVM. Many problems of interest in machine learning can be reduced to the problem of detecting a planted clique, e.g. detecting correlations [1, Section 4.6], correlation clustering [21], etc. The planted clique problem consists of identifying a large clique planted in a random graph. There is an elegant approach for identifying the planted clique by computing the Lovász ϑ function [8]; however, it is not practical for large graphs as it requires solving an SDP. We show that the graph associated with the planted clique problem is a SVM-ϑ graph, paving the way for identifying the clique by solving a one-class SVM. Apart from the method based on computing the ϑ function, there are other methods for planted clique identification which do not require solving an SDP [2, 7, 24]. Our result is also competitive with the state-of-the-art non-SDP based approaches [24].
Notation. We denote the Euclidean norm by ‖·‖ and the infinity norm by ‖·‖_∞. Let S^{d-1} = {u ∈ R^d : ‖u‖ = 1} denote the d-dimensional unit sphere. Let S_n denote the set of n×n square symmetric matrices and S_n^+ the n×n symmetric positive semidefinite matrices. For any A ∈ S_n we order the eigenvalues λ_1(A) ≥ ... ≥ λ_n(A). diag(r) denotes a diagonal matrix with diagonal entries given by the components of r. We denote the one-class SVM objective function by

    ω(K) = max_{α_i ≥ 0, i=1,...,n} [ 2 Σ_{i=1}^n α_i − Σ_{i,j} α_i α_j K_ij ] ,   (1)

where K ∈ S_n^+ and we write f(α; K) for the maximand. Let G = (V, E) be a graph on vertices V = {1, ..., n} with edge set E. Let A ∈ S_n denote the adjacency matrix of G, where A_ij = 1 if edge (i,j) ∈ E, and 0 otherwise. An eigenvalue of graph G means an eigenvalue of its adjacency matrix. Let Ḡ denote the complement graph of G. The adjacency matrix of Ḡ is Ā = ee^T − I − A, where e = [1, 1, ..., 1]^T is the all-ones vector of length n and I denotes the identity matrix. Let G_S = (S, E_S) denote the subgraph induced by S ⊆ V in graph G; its density is γ(G_S) = |E_S| / C(|S|, 2). Let N_i(G) = {j ∈ V : (i,j) ∈ E} denote the set of neighbours of vertex i in graph G, and define the degree of node i as d_i(G) = |N_i(G)|. An independent set in G (a clique in Ḡ) is a subset of vertices S ⊆ V for which no (every) pair of vertices has an edge in G (in Ḡ). The notation is standard, see e.g. [3].
2
Lovász ϑ function and Kernel learning
Consider the problem of embedding a graph G = (V, E) on a d-dimensional unit sphere S^{d-1}. The study of this problem was initiated in [19], which introduced the idea of an orthogonal labelling: an orthogonal labelling of a graph G = (V, E) with |V| = n is a matrix U = [u_1, ..., u_n] ∈ R^{d×n} such that u_i^T u_j = 0 whenever (i,j) ∉ E and u_i ∈ S^{d-1}, i = 1, ..., n.
An orthogonal labelling defines an embedding of a graph on a d-dimensional unit sphere: every vertex i maps to a vector u_i on the unit sphere, and for every (i,j) ∉ E, u_i and u_j are orthogonal. Using the notion of orthogonal labellings, [19] defined a function, famously known as the Lovász ϑ function, which upper bounds the size of the maximum independent set. More specifically,

    for any graph G : ALPHA(G) ≤ ϑ(G),

where ALPHA(G) is the size of the largest independent set. Finding large independent sets is a fundamental problem in algorithm design and analysis, and computing ALPHA(G) is a classic NP-hard problem which is hard even to approximate [11]. However, the Lovász function ϑ(G) gives a tractable upper bound, and the Lovász ϑ function has since been extensively used in solving a variety of algorithmic problems, e.g. [6]. It may be useful to recall the definition of the Lovász ϑ function. Denote the set of all possible orthogonal labellings of G by Lab(G) = {U = [u_1, ..., u_n] | u_i ∈ S^{d-1}, u_i^T u_j = 0 ∀(i,j) ∉ E}. Then

    ϑ(G) = min_{U ∈ Lab(G)} min_{c ∈ S^{d-1}} max_i 1 / (c^T u_i)^2 .   (2)
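For reference, ϑ(G) can also be computed exactly (for small graphs) from one of the standard equivalent SDP forms discussed in [16]; a minimal sketch of our own, assuming cvxpy with an SDP-capable solver:

```python
import cvxpy as cp

def lovasz_theta(A):
    """theta(G) via the SDP max <J, X> s.t. X PSD, tr(X) = 1, X_ij = 0 for (i,j) in E.
    A is the 0/1 adjacency matrix; feasible only for modest n."""
    n = A.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    cons = [X >> 0, cp.trace(X) == 1]
    cons += [X[i, j] == 0 for i in range(n)
             for j in range(i + 1, n) if A[i, j] == 1]
    prob = cp.Problem(cp.Maximize(cp.sum(X)), cons)
    prob.solve()
    return prob.value
```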
There exist several other equivalent definitions of ϑ; for a comprehensive discussion see [16]. However, computation of the Lovász ϑ function is not practical even for moderately sized graphs, as it requires solving a semidefinite program on a matrix of the size of the graph. In the following theorem, we show that there exists a connection between the ϑ function and the SVM formulation.
Theorem 2.1. For an undirected graph G = (V, E) with |V| = n, let K(G) := {K ∈ S_n^+ | K_ii = 1, i ∈ [n], K_ij = 0, (i,j) ∉ E}. Then ϑ(G) = min_{K ∈ K(G)} ω(K).

Proof. We begin by noting that any K ∈ K(G) is positive semidefinite, and hence there exists U ∈ R^{d×n} such that K = U^T U. Note that K_ij = u_i^T u_j, where u_i is a column of U. Hence by inspection it is clear that the columns of U define an orthogonal labelling of G, i.e. U ∈ Lab(G). Using a similar argument we can show that for any U ∈ Lab(G), the matrix K = U^T U is an element of K(G). The set of valid kernel matrices K(G) is thus equivalent to Lab(G). Note that if U is a labelling, then U diag(ε) is also an orthogonal labelling for any ε^T = [ε_1, ..., ε_n], ε_i = ±1, i = 1, ..., n. It thus suffices to consider only those labellings for which c^T u_i ≥ 0 ∀ i = 1, ..., n holds. For a fixed c one can write max_i 1/(c^T u_i)^2 = min_t t^2 subject to 1/(c^T u_i) ≤ t; this is true because the minimum over t is attained at max_i 1/(c^T u_i). Setting w = 2tc yields the relation min_{c ∈ S^{d-1}} max_i 1/(c^T u_i)^2 = min_{w ∈ R^d} ‖w‖^2 / 4 subject to w^T u_i ≥ 2. This establishes that, for a labelling U, the optimal c is obtained by solving a one-class SVM. Application of strong duality immediately leads to the claim min_{c ∈ S^{d-1}} max_i 1/(c^T u_i)^2 = ω(K), where K = U^T U and ω(K) is defined in (1). As there is a correspondence between each element of Lab(G) and K, minimization of ω(K) over K is equivalent to computing the ϑ(G) function.
This is a significant result which establishes a connection between two well studied formulations, namely the ϑ function and the SVM formulation. An important consequence of Theorem 2.1 is an easily computable upper bound on ϑ(G), namely that for any graph G

    ALPHA(G) ≤ ϑ(G) ≤ ω(K)   ∀ K ∈ K(G) .   (3)

Since solving ω(K) is a convex quadratic program, it is indeed a computationally efficient alternative to the ϑ function. In fact we will show that there exist families of graphs for which ϑ(G) can be approximated to within a constant factor by ω(K) for suitable K. Theorem 2.1 is closely related to the following result proved in [20].
Theorem 2.2. [20] For a graph G = (V, E) with |V| = n, let C ∈ S_n be a matrix with C_ij = 0 whenever (i,j) ∉ E. Then

    ϑ(G) = min_C v(G, C),  where  v(G, C) = max_{x ≥ 0} [ 2 x^T e − x^T ( C / (−λ_n(C)) + I ) x ] .

Proof. See [20].
See that for any feasible C, the matrix I + C/(−λ_n(C)) ∈ K(G). Theorem 2.1 is a restatement of Theorem 2.2, but has the additional advantage that the stated optimization problem can be solved as an SDP. The optimization problem min_C v(G, C) with constraints on C is not an SDP. If we fix C = A, the adjacency matrix, we obtain a very interesting orthogonal labelling, which we will refer to as the LS labelling, introduced in [20]. Indeed, there exists a family of graphs, called Q graphs, for which the LS labelling yields the interesting result ALPHA(G) = v(G, A); see [20]. On a Q graph one does not need to compute an SDP, but can solve a one-class SVM, which has obvious computational benefits. Inspired by this result, in the remaining part of the paper we study this labelling more closely. As a labelling is completely defined by the associated kernel matrix, we refer to the following kernel as the LS labelling:

    K = A / ρ + I ,  where ρ ≥ −λ_n(A) .   (4)
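As a concrete sketch of this computational route — our own minimal implementation, with projected gradient ascent standing in for a production SVM solver such as LIBSVM — one can build the LS labelling and evaluate ω(K), which by Eq. (3) upper-bounds ALPHA(G):

```python
import numpy as np

def ls_kernel(A, eps=1e-6):
    # LS labelling of Eq. (4): K = A/rho + I with rho >= -lambda_n(A);
    # eps guards the degenerate case lambda_n(A) = 0
    rho = -np.linalg.eigvalsh(A).min() + eps
    return A / rho + np.eye(A.shape[0])

def omega(K, iters=5000):
    """omega(K) of Eq. (1) by projected gradient ascent on the one-class
    SVM objective f(alpha; K) = 2*sum(alpha) - alpha' K alpha, alpha >= 0."""
    n = K.shape[0]
    step = 1.0 / (2.0 * np.linalg.eigvalsh(K).max())  # 1 / Lipschitz constant
    alpha = np.zeros(n)
    for _ in range(iters):
        alpha = np.maximum(alpha + step * (2.0 - 2.0 * K @ alpha), 0.0)
    return 2.0 * alpha.sum() - alpha @ K @ alpha, alpha
```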
3
SVM-ϑ graphs: Graphs where the ϑ function can be approximated by SVM
We now introduce a class of graphs on which the ϑ function can be well approximated by ω(K) for K defined by (4). In the spirit of approximation algorithms we define:
Definition 3.1. A graph G is an SVM-ϑ graph if ω(K) ≤ (1 + O(1)) ϑ(G), where K is an LS labelling.
Such classes of graphs are interesting because on them one can approximate the Lovász ϑ function by solving an SVM instead of an SDP, which in turn can be extremely useful in the design and analysis of approximation algorithms. We will demonstrate two examples of SVM-ϑ graphs, namely (a) the Erdős–Rényi random graph G(n, 1/2) and (b) a planted variation. Here the relaxation ω(K) could be used in place of ϑ(G), resulting in algorithms with the same quality guarantees but with faster running time — in particular, this allows the algorithms to be scaled to large graphs.
The classical Erdős–Rényi random graph G(n, 1/2) has n vertices, and each edge (i,j) is present independently with probability 1/2. We list a few facts about G(n, 1/2) that will be used repeatedly.
Fact 3.1. For G(n, 1/2):
- With probability 1 − O(1/n), the degree of each vertex is in the range n/2 ± O(√(n log n)).
- With probability 1 − e^{−n^c} for some c > 0, the maximum eigenvalue is n/2 ± o(n) and the minimum eigenvalue is −(1 + o(1))√n [9].
Theorem 3.1. Let ε > √2 − 1. For G = G(n, 1/2), with probability 1 − O(1/n), ω(K) ≤ (1 + ε) ϑ(G), where K is defined in (4) with ρ = (1+ε)√n / √2.
Proof. We begin by considering the case ρ = (1+ε)√n/√2. By Fact 3.1, for all such choices of ρ the minimum eigenvalue of (1/ρ)A + I is, almost surely, greater than 0, which implies that f(α; K) (see (1)) is strongly concave. For such functions the KKT conditions are necessary and sufficient for optimality. The KKT conditions for G(n, 1/2) are given by the following equations:

    α_i + (1/ρ) Σ_{j:(i,j)∈E} α_j = 1 + β_i ,   β_i α_i = 0 ,   β_i ≥ 0 .   (5)

As A is random, we begin by analyzing the case of the expectation of A. Let E(A) = (1/2)(ee^T − I) be the expectation of A. For the given choice of ρ, the matrix K̄ = E(A)/ρ + I is positive definite. More importantly, f(α; K̄) is again strongly concave and attains its maximum at a KKT point. By direct verification, ᾱ = β̄e, where β̄ = 2ρ/(n − 1 + 2ρ), satisfies

    ᾱ + (1/ρ) E(A) ᾱ = e .   (6)
Thus ᾱ is the KKT point for the problem

    f̄ = max_{α ≥ 0} f(α; K̄) = 2 Σ_{i=1}^n ᾱ_i − ᾱ^T ( E(A)/ρ + I ) ᾱ = n β̄ ,   (7)

with optimal objective value f̄. By the choice of ρ we can write β̄ = 2ρ/n + O(1/n). Using the fact about the degrees of vertices in G(n, 1/2), we know that

    a_i^T e = (n−1)/2 + ε_i ,  with |ε_i| ≤ √(n log n) ,   (8)

where a_i^T is the i-th row of the adjacency matrix A. As a consequence we note that

    ᾱ_i + (1/ρ) Σ_j A_ij ᾱ_j − 1 = (β̄/ρ) ε_i .   (9)
Recalling the definition of f and using the above equation along with (8) gives

    | f(ᾱ; K) − f̄ | ≤ n (β̄^2 / ρ) √(n log n) .   (10)

As noted before, the function f(α; K) is strongly concave, with ∇²_α f(α; K) = −2K ⪯ −2(1 − √2/(1+ε)) I for all feasible α; write t for this strong-concavity constant. Recalling a useful result from convex optimization (see Lemma 3.1), we obtain

    ω(K) − f(ᾱ; K) ≤ (1/(2t)) ‖∇f(ᾱ; K)‖^2 .   (11)

Observing that ∇f(α; K) = 2(e − α − (1/ρ)Aα) and using the relation between ‖·‖_∞ and the 2-norm along with (9) and (8) gives ‖∇f(ᾱ; K)‖ ≤ √n ‖∇f(ᾱ; K)‖_∞ ≤ 2n(β̄/ρ)√(log n). Plugging this estimate into (11) and using equation (10), we obtain ω(K) ≤ f̄ + O(log n) = √2(1+ε)√n + O(log n); the second equality follows by plugging the value of β̄ into (7). It is well known [6] that ϑ(G) = (2 + o(1))√n for G(n, 1/2) with high probability. One concludes that ω(K) ≤ ((1+ε)/√2) ϑ(G) + o(√n), and the theorem follows by the choice of ε.
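A quick empirical illustration of Theorem 3.1 on a single sample — our own experiment, reusing the omega() sketch from Section 2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 2000, 1.0
A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
A = A + A.T                                      # one sample of G(n, 1/2)
rho = (1.0 + eps) * np.sqrt(n) / np.sqrt(2.0)    # the choice in Theorem 3.1
K = A / rho + np.eye(n)                          # LS labelling, Eq. (4)
val, _ = omega(K)
print(val / (2.0 * np.sqrt(n)))  # ratio to the 2*sqrt(n) scale of theta(G)
```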
Discussion: Theorem 3.1 establishes that instead of an SDP one can solve an SVM to evaluate the ϑ function on G(n, 1/2). Although it is well known that ALPHA(G(n, 1/2)) = 2 log n whp, there is no known polynomial time algorithm for computing the maximum independent set. [6] gives an approximation algorithm that finds an independent set in G(n, p), which runs in expected polynomial time via a computation of ϑ(G(n, p)); this also applies to p = 1/2. The ϑ function also serves as a guarantee of the approximation quality, which other algorithms such as a simple greedy algorithm cannot give. Theorem 3.1 allows us to obtain similar guarantees without the computational overhead of solving an SDP. Apart from finding independent sets, computing ϑ(G(n, p)) is also used as a subroutine in colorability [6], and here again one can use the SVM based approach to approximate the ϑ function.
Similar arguments show that other families of graphs, such as the 11 families of pseudo-random graphs described in [17], are also SVM-ϑ graphs.
Lemma 3.1. [4] A function g : C ⊆ R^d → R is said to be strongly concave over C if there exists t > 0 such that ∇²g(x) ⪯ −tI for all x ∈ C. For such functions one can show that if p* = max_{x ∈ C} g(x) < ∞, then

    ∀ x ∈ C :  p* − g(x) ≤ (1/(2t)) ‖∇g(x)‖^2 .
4
Dense common subgraph detection
The problem of finding a large dense subgraph in multiple graphs has many applications [23, 22, 18]. We introduce the notion of a common orthogonal labelling and show that it is indeed possible to recover dense regions in large graphs by solving an MKL problem. This constitutes significant progress with respect to state-of-the-art enumerative methods [14].
Problem definition. Let G = {G^(1), ..., G^(M)} be a set of simple, undirected graphs G^(m) = (V, E^(m)) defined on the vertex set V = {1, ..., n}. Find a common subgraph which is dense in all the graphs.
Most algorithms which attempt the problem of finding a dense region are enumerative in nature and hence do not scale well to finding large cliques. [14] first studied a related problem of finding all possible common subgraphs which, for a given choice of parameters {γ^(1), ..., γ^(M)}, are at least γ^(i)-dense in each G^(i). In the worst case, the algorithm performs a depth-first search over the space of C(n, n_T) possible cliques of size n_T. This has Ω(C(n, n_T)) space and time complexity, which makes it impractical for moderately large n_T. For example, finding quasi-cliques of size 60 requires 8 hours (see Section 6).
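To get a sense of this enumeration blow-up, the binomial search space can be evaluated directly (an illustration of ours, not from [14]):

```python
from math import comb

# the enumerative search space grows as C(n, n_T): already a size-60
# quasi-clique in a 1000-vertex graph gives on the order of 10**99 candidates
print(comb(1000, 60))
```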
In the remainder of this section, we focus on finding a large common sparse subgraph in a given collection of graphs, with the observation that this is equivalent to finding a large common dense subgraph in the set of complement graphs. To this end we introduce the following definition.
Definition 4.1. Given simple unweighted graphs G^(m) = (V, E^(m)), m = 1, ..., M, on a common vertex set V with |V| = n, a common orthogonal labelling of all the graphs is given by vectors u_i ∈ S^{d-1} such that u_i^T u_j = 0 if (i,j) ∉ E^(m) for all m = 1, ..., M.^1
Following the arguments of Section 2, it is immediate that the size of the largest common independent set is upper bounded by min_{K ∈ L} ω(K), where L = {K ∈ S_n^+ : K_ii = 1 ∀ i ∈ [n], K_ij = 0 whenever (i,j) ∉ E^(m) ∀ m = 1, ..., M}. We wish to exploit this fact to identify large common sparse regions in general graphs. Unfortunately this problem is an SDP and will not scale well to large graphs. Taking a cue from the MKL literature, we pose a restricted version of the problem, namely

    min_K ω(K)  subject to  K = Σ_{m=1}^M η_m K^(m),  η_m ≥ 0,  Σ_{m=1}^M η_m = 1 ,   (12)

where K^(m) is an orthogonal labelling of G^(m). Direct verification shows that any feasible K is also a common orthogonal labelling. Using the fact that, for any x ∈ R^M, min_{p_m ≥ 0, Σ_m p_m = 1} p^T x = min_m x_m = max{ t | x_m ≥ t ∀ m = 1, ..., M }, one can recast the optimization problem in (12) as follows:

    max_{t ∈ R, α_i ≥ 0} t   s.t.   f(α; K^(m)) ≥ t   ∀ m = 1, ..., M ,   (13)
where K^(m) is the LS labelling of G^(m), m = 1, ..., M. The above optimization can be readily solved by state-of-the-art MKL solvers. This result allows us to build a parameter-free common sparse subgraph (CSS) algorithm, shown in Figure 1, with the following advantages: it provides a theoretical bound on the subgraph density (Claim 4.1 below), and it requires no parameters from the user beyond the set of graphs G^(1), ..., G^(M).
Let α* be the optimal solution of (13), and let SV = {i : α*_i > 0} and S1 = {i : α*_i = 1}, with cardinalities n_sv = |SV| and n_1 = |S1| respectively. Let

    ᾱ^(m)_{min,S} = min_{i ∈ S} [ Σ_{j ∈ N_i(G_S^(m))} α*_j / d_i(G_S^(m)) ]

denote the minimum, over vertices i, of the average support-vector coefficient in the neighbourhood of vertex i in the induced subgraph G_S^(m), which has degree d_i(G_S^(m)) = |N_i(G_S^(m))|. We define

    T^(m) = { i ∈ SV : d_i(G_SV^(m)) < (1−c) ρ^(m) / ᾱ^(m)_{min,SV} } ,  where c = min_{i ∈ SV} α*_i   (14)

and ρ^(m) is the LS labelling parameter of G^(m) from Eq. (4).
? min,SV
Claim 4.1. Let T ? V be computed as in Al(m)
gorithm 1. The subgraph GT induced by T ,
in graph G(m) , has density at most ? (m) where
(1?c)?(m)
? (m) = ?? min,SV
(nT ?1)
?? = Use MKL solvers to solve eqn. (13)
T = ?m T (m) {eqn. (14)}
Return T
Figure 1: Algorithm for finding common sparse
Pn
subgraph: T = CSS(G(1) , . . . , G(M ) )
?
Proof. (Sketch) At optimality, t =
?
.
i=1 i
P
P
P
(m) ?
?
?
This allows us to write 0 ?
i?S ?i (2 ? ?i ?
j6=i Kij ?j ) ? t as 0 ?
i?T (1 ? c ?
(m)
di (GT ) (m)
nT
?
? min,SV ) Dividing by 2 completes the proof.
?(m)
^1 This is equivalent to defining an orthogonal labelling on the union graph of G^(1), ..., G^(M).
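A minimal sketch of the CSS step in Figure 1 — our own implementation, replacing a full MKL solver with projected supergradient ascent on the equivalent max–min form of Eq. (13), and reusing ls_kernel from the earlier sketch; the thresholding of Eq. (14) is omitted:

```python
import numpy as np

def css(adjs, iters=5000):
    """Maximize min_m f(alpha; K^(m)) over alpha >= 0 (Eq. 13)."""
    Ks = [ls_kernel(A) for A in adjs]
    n = Ks[0].shape[0]
    step = 1.0 / (2.0 * max(np.linalg.eigvalsh(K).max() for K in Ks))
    alpha = np.zeros(n)
    for _ in range(iters):
        vals = [2.0 * alpha.sum() - alpha @ K @ alpha for K in Ks]
        K = Ks[int(np.argmin(vals))]   # active kernel gives a supergradient of the min
        alpha = np.maximum(alpha + step * (2.0 - 2.0 * K @ alpha), 0.0)
    return alpha                        # threshold per Eq. (14) to obtain T
```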
5
Finding Planted Cliques in G(n, 1/2) graphs
Finding large cliques or independent sets is a computationally difficult problem even in random graphs. While it is known that the size of the largest clique or independent set in G(n, 1/2) is 2 log n with high probability, there is no known efficient algorithm to find a clique of size significantly larger than log n — even a cryptographic application has been suggested based on this (see the discussion and references in the introduction of [8]).
Hidden planted clique: a random graph G(n, 1/2) is chosen first, and then a clique of size k is introduced on the first 1, ..., k vertices. The problem is to identify the clique.
[8] showed that if k = Ω(√n), then the hidden clique can be discovered in polynomial time by computing the Lovász ϑ function. There are other approaches [2, 7, 24] which do not require computing the ϑ function.
We consider the (equivalent) complement model G(n, 1/2, k), where an independent set is planted on a set of k vertices. We show that in the regime k = Ω(√n), G(n, 1/2, k) is a SVM-ϑ graph. We will further demonstrate that, as a consequence, one can identify the hidden independent set with high probability by solving an SVM. The following is the main result of the section.
Theorem 5.1. For G = G(n, 1/2, k) and k = 2t√n, for a large enough constant t ≥ 1, with K as in (4) and ρ = √n + k/2,

    ω(K) = 2(t + 1)√n + O(log n) = (1 + 1/t + o(1)) ϑ(G)

with probability at least 1 − O(1/n).
Proof. The proof is analogous to that of Theorem 3.1. Note that |λ_n(G)| ≤ √n + k/2. First we consider the expected case, where all vertices outside the planted part S are adjacent to k/2 vertices in S and (n−k)/2 vertices outside S, and all vertices in the planted part have degree (n−k)/2. We check that α_i = 2(t+1)/√n for i ∉ S and α_i = 2(t+1)^2/√n for i ∈ S satisfy the KKT conditions with an error of O(1/√n). Now apply Chernoff bounds to conclude that, with high probability, all vertices in S have degree (n−k)/2 ± √((n−k) log(n−k)), and those outside S are adjacent to k/2 ± √(k log k) vertices in S and to (n−k)/2 ± √((n−k) log(n−k)) vertices outside S. Now we check that the same solution satisfies the KKT conditions of G(n, 1/2, k) with an error of O(√(log n / n)). Using the same arguments as in the proof of Theorem 3.1, we conclude that ω(K) ≤ 2(t+1)√n + O(log n). Since ϑ(G) = 2t√n for this case [8], the result follows.
The above theorem suggests that the planted independent set can be recovered by taking the top k values of the optimal solution. In the experimental section we discuss the performance of this recovery algorithm. The runtime of this algorithm is one call to a SVM solver, which is considerably cheaper than the SDP option. Indeed, the algorithm due to [8] requires computation of the ϑ function, and the current best known algorithm for ϑ computation has an O(n^5 log n) run time complexity [5]. In contrast, the proposed approach needs to solve an SVM and hence scales well to large graphs. Our approach is competitive with the state of the art [24], as it gives the same high probability guarantees and has the same running time, O(n^2). Here we have assumed that we are working with a SVM solver which has a time complexity of O(n^2) [13].
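A minimal sketch of this recovery procedure, again with projected gradient in place of a production O(n^2) SVM solver such as LIBSVM (our own implementation; it recovers the planted independent set in G(n, 1/2, k), so for a planted clique apply it to the complement graph):

```python
import numpy as np

def recover_planted_set(A, k):
    """LS labelling with rho = sqrt(n) + k/2 (Theorem 5.1), one-class SVM
    by projected gradient, and the top-k alpha values as the recovered set."""
    n = A.shape[0]
    rho = np.sqrt(n) + k / 2.0
    K = A / rho + np.eye(n)                            # Eq. (4)
    step = 1.0 / (2.0 * np.linalg.eigvalsh(K).max())
    alpha = np.zeros(n)
    for _ in range(5000):
        alpha = np.maximum(alpha + step * (2.0 - 2.0 * K @ alpha), 0.0)
    return np.argsort(alpha)[-k:]    # top-k coefficients index the planted set
```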
6
Experimental evaluation
Comparison with exhaustive approach [14]. We generate m = 3 synthetic random graphs over n vertices with average density γ = 0.2, having a single (common) quasi-clique of size k = 2√n with density γ = 0.95 in all three graphs. This is similar to the synthetic graphs generated in the original paper [see 14, Section 6.1.2]. We note that both our MKL-based approach and the exhaustive search in [14] recover the quasi-clique. However, the time requirements are drastically different. All experiments were conducted on a computer with 16 GB RAM and an Intel X3470 quad-core processor running at 2.93 GHz. Three values of k, namely k = 50, 60 and 100, were used. It is interesting to note that CROCHET [14] took 2 hours and 9 hours for the k = 50 and k = 60 sized cliques, and failed to find a clique of size 100. The corresponding times for MKL are 47.5, 54.8 and 137.6 seconds respectively.
Common dense subgraph detection. We evaluate our algorithm for finding large dense regions on the DIMACS Challenge graphs^2 [15], a comprehensive benchmark for the testing of clique finding and related algorithms. For the families of dense graphs (brock, san, sanr), we focus on finding a large dense region in the complement of the original graphs.
We run Algorithm 1 using SimpleMKL^3 to find a large common dense subgraph. In order to evaluate the performance of our algorithm, we compute ā = max_m a^(m) and a = min_m a^(m), where a^(m) = γ(G_T^(m)) / γ(G^(m)) is the relative density of the induced subgraph (compared to the original graph density), and n_T / N is the relative size of the induced subgraph compared to the original graph size. We want a high value of n_T / N, while a should not be lower than 1. Table 1 shows the evaluation of Algorithm 1 on the DIMACS dataset. We note that our algorithm finds a large subgraph (large n_T / N) with higher density compared to the original graph in all of the DIMACS graph classes, making it suitable for finding large dense regions in multiple graphs. In all cases the size of the subgraph n_T was more than 100. The MKL experiments reported in Table 1 took less than 1 minute for each graph family, while the algorithm in [14] aborts after several hours due to memory constraints.
Planted clique recovery. We generate 100 random graphs based on the planted clique model G(n, 1/2, k), where n = 30000 and the hidden clique size is k = 2t√n for each choice of t. We evaluate the recovery algorithm discussed in Section 4.2. The SVM problem is solved using Libsvm^4. For t ≥ 2 we find perfect recovery of the clique on all the graphs, which is in agreement with Theorem 5.1. It is worth noting that the approach takes 10 minutes to recover the clique in this graph of 30000 vertices, which is far beyond the scope of SDP based procedures.
Table 1: Common dense subgraph recovery on multiple graphs in the DIMACS dataset. Here $\bar a$ and $\underline a$ denote the maximum and minimum relative density of the induced subgraph (relative to the density of the original graph), and $n_T/N$ is the relative size of the induced subgraph compared to the original graph size.

Graph family | N    | M | $n_T/N$ | $\bar a$ | $\underline a$
c-fat200     | 200  | 3 | 0.50    | 2.12     | 0.99
c-fat500     | 500  | 4 | 0.31    | 3.57     | 1.01
brock200*    | 200  | 4 | 0.41    | 1.36     | 0.99
brock400*    | 400  | 4 | 0.50    | 1.15     | 1.05
brock800*    | 800  | 4 | 0.50    | 1.08     | 1.01
p_hat300     | 300  | 3 | 0.53    | 1.53     | 1.15
p_hat500     | 500  | 3 | 0.48    | 1.55     | 1.17
p_hat700     | 700  | 3 | 0.45    | 1.58     | 1.18
p_hat1000    | 1000 | 3 | 0.43    | 1.60     | 1.19
p_hat1500    | 1500 | 3 | 0.38    | 1.63     | 1.20
san200*      | 200  | 5 | 0.50    | 1.51     | 1.08
san400*      | 400  | 3 | 0.42    | 1.19     | 1.02
sanr200*     | 200  | 2 | 0.39    | 1.86     | 1.04
sanr400*     | 400  | 2 | 0.43    | 1.20     | 1.02

7 Conclusion

In this paper we have established that the Lovász ϑ function, well studied in graph theory, can be linked to the one-class SVM formulation. This link allows us to design scalable algorithms for computationally difficult problems. In particular, we have demonstrated that finding a common dense region in multiple graphs can be solved by an MKL problem, while finding a large planted clique can be solved by a one-class SVM.
Acknowledgements
CB is grateful to the Department of CSE, Chalmers University of Technology, for their hospitality, and was supported by grants from the ICT and Transport Areas of Advance, Chalmers University. VJ and DD were supported by the SSF grant Data Driven Secure Business Intelligence.
² ftp://dimacs.rutgers.edu/pub/challenge/graph/benchmarks/clique/
³ http://asi.insa-rouen.fr/enseignants/~arakotom/code/mklindex.html
⁴ http://www.csie.ntu.edu.tw/~cjlin/libsvm/
References
[1] Louigi Addario-Berry, Nicolas Broutin, Gábor Lugosi, and Luc Devroye. Combinatorial testing problems. Annals of Statistics, 38:3063-3092, 2010.
[2] Noga Alon, Michael Krivelevich, and Benny Sudakov. Finding a large hidden clique in a random graph. Random Structures and Algorithms, pages 457-466, 1998.
[3] B. Bollobás. Modern Graph Theory, volume 184. Springer Verlag, 1998.
[4] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.
[5] T.-H. Hubert Chan, Kevin L. Chang, and Rajiv Raman. An SDP primal-dual algorithm for approximating the Lovász theta function. In ISIT, 2009.
[6] Amin Coja-Oghlan and Anusch Taraz. Exact and approximative algorithms for coloring G(n, p). Random Struct. Algorithms, 24(3):259-278, 2004.
[7] U. Feige and D. Ron. Finding hidden cliques in linear time. In AofA10, 2010.
[8] Uriel Feige and Robert Krauthgamer. Finding and certifying a large hidden clique in a semirandom graph. Random Struct. Algorithms, 16:195-208, March 2000.
[9] Z. Füredi and J. Komlós. The eigenvalues of random symmetric matrices. Combinatorica, 1:233-241, 1981.
[10] Michel X. Goemans. Semidefinite programming in combinatorial optimization. Math. Program., 79:143-161, 1997.
[11] J. Håstad. Clique is hard to approximate within n^{1-ε}. Acta Mathematica, 182(1):105-142, 1999.
[12] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[13] Don R. Hush, Patrick Kelly, Clint Scovel, and Ingo Steinwart. QP algorithms with guaranteed accuracy and run time for support vector machines. Journal of Machine Learning Research, 7:733-769, 2006.
[14] D. Jiang and J. Pei. Mining frequent cross-graph quasi-cliques. ACM Transactions on Knowledge Discovery from Data (TKDD), 2(4):16, 2009.
[15] D.S. Johnson and M.A. Trick. Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, October 11-13, 1993, volume 26. American Mathematical Society, 1996.
[16] Donald Knuth. The sandwich theorem. Electronic Journal of Combinatorics, 1(A1), 1994.
[17] Michael Krivelevich and Benny Sudakov. Pseudo-random graphs. In More Sets, Graphs and Numbers, volume 15 of Bolyai Society Mathematical Studies, pages 199-262. Springer Berlin Heidelberg, 2006.
[18] V.E. Lee, N. Ruan, R. Jin, and C. Aggarwal. A survey of algorithms for dense subgraph discovery. Managing and Mining Graph Data, pages 303-336, 2010.
[19] L. Lovász. On the Shannon capacity of a graph. Information Theory, IEEE Transactions on, 25(1):1-7, 1979.
[20] C.J. Luz and A. Schrijver. A convex quadratic characterization of the Lovász theta number. SIAM Journal on Discrete Mathematics, 19(2):382-387, 2006.
[21] Claire Mathieu and Warren Schudy. Correlation clustering with noisy input. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '10, pages 712-728, Philadelphia, PA, USA, 2010. Society for Industrial and Applied Mathematics.
[22] P. Pardalos and S. Rebennack. Computational challenges with cliques, quasi-cliques and clique partitions in graphs. Experimental Algorithms, pages 13-22, 2010.
[23] V. Spirin and L.A. Mirny. Protein complexes and functional modules in molecular networks. Proceedings of the National Academy of Sciences, 100(21):12123, 2003.
[24] Yael Dekel, Ori Gurel-Gurevich, and Yuval Peres. Finding hidden cliques in linear time with high probability. In ANALCO11, 2011.
3,871 | 4,504 | Recovery of Sparse Probability Measures via Convex Programming
Mert Pilanci and Laurent El Ghaoui
Electrical Engineering and Computer Science
University of California Berkeley
Berkeley, CA 94720
{mert,elghaoui}@eecs.berkeley.edu
Venkat Chandrasekaran
Department of Computing and Mathematical Sciences
California Institute of Technology
Pasadena, CA 91125
[email protected]
Abstract
We consider the problem of cardinality-penalized optimization of a convex function over the probability simplex with additional convex constraints. The classical
$\ell_1$ regularizer fails to promote sparsity on the probability simplex, since the $\ell_1$ norm
on the probability simplex is trivially constant. We propose a direct relaxation of
the minimum cardinality problem and show that it can be efficiently solved using
convex programming. As a first application we consider recovering a sparse probability measure given moment constraints, in which case our formulation becomes linear programming, hence can be solved very efficiently. A sufficient condition for
exact recovery of the minimum cardinality solution is derived for arbitrary affine
constraints. We then develop a penalized version for the noisy setting which can
be solved using second-order cone programs. The proposed method outperforms
known rescaling heuristics based on the $\ell_1$ norm. As a second application we consider
convex clustering using a sparse Gaussian mixture and compare our results with
the well-known soft k-means algorithm.
1 Introduction

We consider optimization problems of the following form:
$$p^\star = \min_{x \in \mathcal{C},\; \mathbf{1}^T x = 1,\; x \ge 0} \; f(x) + \lambda\,\mathbf{card}(x),$$
where f is a convex function, $\mathcal{C}$ is a convex set, $\mathbf{card}(x)$ denotes the number of nonzero elements of
x, and $\lambda \ge 0$ is a given tradeoff parameter for adjusting the desired sparsity. Since the cardinality penalty
is inherently of a combinatorial nature, these problems are in general not solvable in polynomial time.
In recent years $\ell_1$-norm penalization as a proxy for penalizing cardinality has attracted a great deal
of attention in machine learning, statistics, engineering, and applied mathematics [1], [2], [3], [4].
However, the aforementioned types of sparse probability optimization problems are not amenable to
the $\ell_1$ heuristic, since $\|x\|_1 = \mathbf{1}^T x = 1$ is constant on the probability simplex. Numerous problems in
machine learning, statistics, finance, and signal processing fall into this category; however, to
the authors' knowledge there is no known general convex optimization strategy for such problems
constrained on the probability simplex.
(a) Level sets of the regularization function on the probability simplex. (b) The sparsest probability distribution on the set $\mathcal{C}$ is $x^\star$ (green), which also minimizes $1/\max_i x_i$ on the intersection (red).
Figure 1: Probability simplex and the reciprocal of the infinity norm.

The aim of this paper is to claim that the reciprocal of the infinity norm, i.e., $1/\max_i x_i$, can be used as a convex heuristic for penalizing cardinality on the probability simplex, and that the resulting relaxations can be solved via convex optimization. Figures 1(a) and
1(b) depict the level sets and an example of a sparse probability measure which has maximal infinity
norm. In the following sections we expand our discussion by exploring two specific problems: recovering a measure from given moments, where f = 0 and $\mathcal{C}$ is affine, and convex clustering, where
f is a log-likelihood and $\mathcal{C} = \mathbb{R}^k$. For the former case we give a sufficient condition for this convex
relaxation to exactly recover the minimal cardinality solution of $p^\star$. We then present numerical simulations for both problems which suggest that the proposed scheme offers a very efficient convex
relaxation for penalizing cardinality on the probability simplex.
2 Optimizing over sparse probability measures

We begin the discussion by first taking an alternative approach to the cardinality penalized optimization by directly lower-bounding the original hard problem using the following relation:
$$\|x\|_1 = \sum_{i=1}^n |x_i| \;\le\; \mathbf{card}(x)\,\max_i |x_i| \;\le\; \mathbf{card}(x)\,\|x\|_\infty,$$
which is essentially one of the core motivations of using the $\ell_1$ penalty as a proxy for cardinality.
When constrained to the probability simplex, the lower bound for the cardinality simply becomes
$1/\max_i x_i \le \mathbf{card}(x)$. Using this bound on the cardinality, we immediately have a lower bound on our
original NP-hard problem, which we denote by $\tilde p^\star$:
$$p^\star \;\ge\; \tilde p^\star := \min_{x \in \mathcal{C},\; \mathbf{1}^T x = 1,\; x \ge 0} f(x) + \frac{\lambda}{\max_i x_i}. \qquad (1)$$
The function $1/\max_i x_i$ is concave, and hence the above lower-bounding problem is not a convex
optimization problem. However, below we show that the above problem can be exactly solved using
convex programming.
Proposition 2.1. The lower-bounding problem defined by $\tilde p^\star$ can be globally solved using the
following n convex programs in n + 1 dimensions:
$$p^\star \;\ge\; \tilde p^\star = \min_{i=1,\dots,n}\; \min_{x \in \mathcal{C},\; \mathbf{1}^T x = 1,\; x \ge 0,\; t \ge 0} \; f(x) + t \quad \text{s.t.} \quad x_i \ge \lambda/t. \qquad (2)$$
Note that the constraint $x_i \ge \lambda/t$ is jointly convex since 1/t is convex in $t \in \mathbb{R}_+$, and such constraints can be handled in most general-purpose convex optimizers, e.g. CVX, using either the positive inverse
function or rotated cone constraints.
Proof.
$$\tilde p^\star = \min_{x \in \mathcal{C},\; \mathbf{1}^T x = 1,\; x \ge 0} f(x) + \min_i \frac{\lambda}{x_i} \qquad (3)$$
$$= \min_i\; \min_{x \in \mathcal{C},\; \mathbf{1}^T x = 1,\; x \ge 0} f(x) + \frac{\lambda}{x_i} \qquad (4)$$
$$= \min_i\; \min_{x \in \mathcal{C},\; \mathbf{1}^T x = 1,\; x \ge 0,\; t \ge 0} f(x) + t \quad \text{s.t.} \quad \frac{\lambda}{x_i} \le t. \qquad (5)$$
The above formulation can be used to efficiently approximate the original cardinality constrained
problem by lower-bounding for arbitrary convex f and C. In the next section we show how to
compute the quality of approximation.
2.1 Computing a bound on the quality of approximation

By virtue of being a relaxation to the original cardinality problem, we have the following remarkable property. Let $\tilde x$ be an optimal solution to the convex program $\tilde p^\star$; then we have the relation
$$f(\tilde x) + \lambda\,\mathbf{card}(\tilde x) \;\ge\; p^\star \;\ge\; \tilde p^\star. \qquad (6)$$
Since the left-hand side and right-hand side of the above bound are readily available when $\tilde p^\star$
defined in (2) is solved, we immediately have a bound on the quality of relaxation. More specifically,
the relaxation is exact, i.e., we find a solution for the original cardinality penalized problem, if the
following holds:
$$f(\tilde x) + \lambda\,\mathbf{card}(\tilde x) = \tilde p^\star.$$
It should be noted that for general cardinality penalized problems, using the $\ell_1$ heuristic does not yield
such a quality bound, since it is neither a lower nor an upper bound in general. Moreover, most of the known
equivalence conditions for $\ell_1$ heuristics, such as the Restricted Isometry Property and variants, are NP-hard to check. Therefore a remarkable property of the proposed scheme is that it comes with a
simple computable bound on the quality of approximation.
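To make the procedure concrete, the following sketch is a minimal implementation of Proposition 2.1, assuming the cvxpy package and a user-supplied convex objective f (this is our own illustration, not the authors' code):

import numpy as np
import cvxpy as cp

def sparse_simplex_min(f, n, lam):
    # Solve min f(x) + lam / max_i x_i over the probability simplex via
    # the n convex programs of Proposition 2.1; f maps a cvxpy variable
    # to a convex expression.
    best_val, best_x = np.inf, None
    for i in range(n):
        x = cp.Variable(n, nonneg=True)
        t = cp.Variable(nonneg=True)
        cons = [cp.sum(x) == 1, x[i] >= lam * cp.inv_pos(t)]  # x_i >= lam/t
        prob = cp.Problem(cp.Minimize(f(x) + t), cons)
        prob.solve()
        if prob.value is not None and prob.value < best_val:
            best_val, best_x = prob.value, x.value
    return best_val, best_x

Comparing $f(\tilde x) + \lambda\,\mathbf{card}(\tilde x)$ (with small entries of the returned $\tilde x$ thresholded to zero) against the returned optimal value gives the exactness certificate of (6).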
3 Recovering a Sparse Measure

Suppose that μ is a discrete probability measure and we would like to know the sparsest measure
satisfying some arbitrary moment constraints:
$$p^\star = \min_\mu\; \mathbf{card}(\mu) \quad : \quad \mathbb{E}_\mu[X_i] = b_i,\quad i = 1, \dots, m,$$
where the $X_i$'s are random variables and $\mathbb{E}_\mu$ denotes expectation with respect to the measure μ. One
motivation for the above problem is the fact that it upper-bounds the minimum entropy power problem:
$$p^\star \;\ge\; \min_\mu\; \exp H(\mu) \quad : \quad \mathbb{E}_\mu[X_i] = b_i,\quad i = 1, \dots, m,$$
where $H(\mu) := -\sum_i \mu_i \log \mu_i$ is the Shannon entropy. Both of the above problems are non-convex
and in general very hard to solve.
be cast as a linear sparse recovery problem:
p? =
min
1T x=1, x?0
card(x) : Ax = b
(7)
As noted previously, applying the `1 heuristic doesn?t work and it does not even yield a unique
solution when the problem is underdetermined since it simply solves a feasibility problem:
p?1
=
=
min
kxk1 : Ax = b
(8)
min
1 : Ax = b
(9)
1T x=1, x?0
1T x=1, x?0
and recovers the true minimum cardinality solution if and only if the set 1T x = 1, x ? 0, Ax = b is
a singleton. This condition may hold in some cases, i.e. when the first 2k ? 1 moments are available,
i.e., A is a Vandermonde matrix where k = card(x) [6]. However in general this set is a polyhedron
containing dense vectors. Below we show how the proposed scheme applies to this problem.
Using the general form in (2), the proposed relaxation is given by the following:
$$(p^\star)^{-1} \;\le\; (\tilde p^\star)^{-1} = \max_{i=1,\dots,n}\; \max_{\mathbf{1}^T x = 1,\; x \ge 0} x_i \quad : \quad Ax = b, \qquad (10)$$
which can be solved very efficiently by solving n linear programs in n variables. The total complexity is at most $O(n^4)$ using a primal-dual LP solver.
It is easy to check that strong duality holds, and the dual problems are given by the following:
$$(\tilde p^\star)^{-1} = \max_{i=1,\dots,n}\; \min_{w,\,\nu}\; w^T b + \nu \quad : \quad A^T w + \nu \mathbf{1} \ge e_i, \qquad (11)$$
where $\mathbf{1}$ is the all-ones vector and $e_i$ is all zeros with a one in only the i-th coordinate.
3.1 An alternative minimal cardinality selection scheme

When the desired criterion is to find a minimum cardinality probability vector satisfying Ax = b, the
following alternative selection scheme offers a further refinement, by picking the lowest cardinality
solution among the n linear programming solutions. Define
$$\tilde x_i := \arg\max_{\mathbf{1}^T x = 1,\; x \ge 0} x_i \quad : \quad Ax = b, \qquad (12)$$
$$\tilde x_{\min} := \arg\min_{i=1,\dots,n} \mathbf{card}(\tilde x_i). \qquad (13)$$
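A minimal sketch of the selection scheme (12)-(13), using scipy's LP solver, is given below; it is an illustrative implementation, not the code used in the experiments:

import numpy as np
from scipy.optimize import linprog

def recover_sparse_measure(A, b, tol=1e-8):
    # Solve the n LPs: max x_i s.t. Ax = b, 1'x = 1, x >= 0, and return
    # the lowest-cardinality solution among them (eqs. (12)-(13)).
    m, n = A.shape
    A_eq = np.vstack([A, np.ones((1, n))])
    b_eq = np.append(b, 1.0)
    best = None
    for i in range(n):
        c = np.zeros(n)
        c[i] = -1.0  # linprog minimizes, so maximize x_i via -x_i
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        if res.success:
            card = int(np.sum(res.x > tol))
            if best is None or card < best[0]:
                best = (card, res.x)
    return None if best is None else best[1]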
The following theorem gives a sufficient condition for the recovery of a sparse measure using the
above method.

Theorem 3.1. Assume that the solution to $p^\star$ in (7) is unique and given by $x^\star$. If the following
condition holds,
$$\min_{\mathbf{1}^T x = 1,\; y \ge 0,\; \mathbf{1}^T y = 1} x_i \quad \text{s.t.} \quad A_S x = A_{S^c} y \;\;>\;\; 0,$$
where $b = Ax^\star$, $A_S$ is the submatrix containing the columns of A corresponding to the non-zero elements of $x^\star$, and $A_{S^c}$ is the submatrix of the remaining columns, then the convex linear program
$$\max_{\mathbf{1}^T x = 1,\; x \ge 0} x_i \quad : \quad Ax = b$$
has a unique solution given by $x^\star$.

Let $\mathrm{Conv}(a_1, \dots, a_m)$ denote the convex hull of the m vectors $\{a_1, \dots, a_m\}$. The following corollary depicts a geometric condition for recovery.

Corollary 3.2. If $\mathrm{Conv}(A_{S^c})$ does not intersect an extreme point of $\mathrm{Conv}(A_S)$, then $\tilde x_{\min} = x^\star$,
i.e., we recover the minimum cardinality solution using n linear programs.
Proof Outline:
Consider the k-th inner linear program defined in the problem $\tilde p^\star$. Using the optimality conditions of
the primal-dual linear program pairs in (10) and (11), it can be shown that the existence of a pair
$(w, \nu)$ satisfying
$$A_S^T w + \nu \mathbf{1} = e_k, \qquad (14)$$
$$A_{S^c}^T w + \nu \mathbf{1} > 0, \qquad (15)$$
implies that the support of the solution of the linear program is exactly equal to the support of $x^\star$, and
in particular they have the same cardinality. Since the solution of $p^\star$ is unique and has minimum
cardinality, we conclude that $x^\star$ is indeed the unique solution to the k-th linear program. Applying
Farkas' lemma and duality theory, we arrive at the conditions defined in Theorem 3.1. The corollary
follows by first observing that the condition of Theorem 3.1 is satisfied if $\mathrm{Conv}(A_{S^c})$ does not
intersect an extreme point of $\mathrm{Conv}(A_S)$. Finally, observe that if any of the n linear programs recovers
the minimal cardinality solution then $\tilde x_{\min} = x^\star$, since $\mathbf{card}(\tilde x_{\min}) \le \mathbf{card}(\tilde x_k)$ for all k.
3.2 Noisy measure recovery

When the data contains noise and inaccuracies, such as when using empirical moments
instead of exact moments, we propose the following noise-aware robust version, which follows
from the general recipe given in the first section:
$$\min_{i=1,\dots,n}\; \min_{\mathbf{1}^T x = 1,\; x \ge 0,\; t \ge 0} \|Ax - b\|_2^2 + t \quad : \quad x_i \ge \lambda/t, \qquad (16)$$
where $\lambda \ge 0$ is a penalty parameter for encouraging sparsity. The above problem can be solved
using n second-order cone programs in n + 1 variables, and hence has $O(n^4)$ worst-case complexity.
The proposed measure recovery algorithms are investigated and compared with a known suboptimal
heuristic in Section 6.
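The noisy formulation (16) admits essentially the same implementation as Proposition 2.1; the sketch below (our own illustration, assuming cvxpy) solves the n second-order cone programs for a fixed λ:

import numpy as np
import cvxpy as cp

def noisy_sparse_measure(A, b, lam):
    # Solve min ||Ax - b||_2^2 + t s.t. x_i >= lam/t over the simplex,
    # for each i = 1, ..., n, and return the best solution (eq. (16)).
    n = A.shape[1]
    best_val, best_x = np.inf, None
    for i in range(n):
        x = cp.Variable(n, nonneg=True)
        t = cp.Variable(nonneg=True)
        obj = cp.Minimize(cp.sum_squares(A @ x - b) + t)
        prob = cp.Problem(obj, [cp.sum(x) == 1, x[i] >= lam * cp.inv_pos(t)])
        prob.solve()
        if prob.value is not None and prob.value < best_val:
            best_val, best_x = prob.value, x.value
    return best_x

Sweeping λ over a grid, as done in Section 6, traces out the tradeoff between data fit and sparsity.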
4 Convex Clustering

In this section we base our discussion on the exemplar-based convex clustering framework of [8].
Given a set of data points $\{z_1, \dots, z_n\}$ of d-dimensional vectors, the task of clustering is to fit a
mixture probability model to maximize the log-likelihood function
$$L := \frac{1}{n} \sum_{i=1}^n \log\Big(\sum_{j=1}^k x_j f(z_i; m_j)\Big),$$
where $f(z; m)$ is an exponential family distribution on $\mathcal{Z}$ with parameter m, and x is a k-dimensional
vector on the probability simplex denoting the mixture weights. For the standard multivariate normal distribution we have $f(z_i; m_j) = e^{-\beta\|z_i - m_j\|_2^2}$ for some parameter $\beta > 0$. As in [8], we further assume that the mean parameter $m_j$ is one of the examples $z_i$, which is unknown a priori.
This assumption helps to simplify the log-likelihood, whose data dependence is now only through a
kernel matrix $K_{ij} := e^{-\beta\|z_i - z_j\|_2^2}$, as follows:
$$L = \frac{1}{n} \sum_{i=1}^n \log\Big(\sum_{j=1}^k x_j e^{-\beta\|z_i - z_j\|_2^2}\Big) \qquad (17)$$
$$= \frac{1}{n} \sum_{i=1}^n \log\Big(\sum_{j=1}^k x_j K_{ij}\Big). \qquad (18)$$
Partitioning the data $\{z_1, \dots, z_n\}$ into a few clusters is equivalent to having a sparse mixture x, i.e.,
each example is assigned to few centers (which are some other examples). Therefore, to cluster the
data we propose to approximate the following cardinality penalized problem:
$$p_c^\star := \max_{\mathbf{1}^T x = 1,\; x \ge 0} \sum_{i=1}^n \log\Big(\sum_{j=1}^k x_j K_{ij}\Big) - \lambda\,\mathbf{card}(x). \qquad (19)$$
As hinted previously, the above problem can be seen as a lower bound for the entropy penalized
problem
$$p_c^\star \;\le\; \max_{\mathbf{1}^T x = 1,\; x \ge 0} \sum_{i=1}^n \log\Big(\sum_{j=1}^k x_j K_{ij}\Big) - \lambda \exp H(x), \qquad (20)$$
where $H(x)$ is the Shannon entropy of the mixture probability vector.
Applying our convexification strategy, we arrive at another upper bound, which can be computed via
convex optimization:
$$p_c^\star \;\le\; \tilde p^\star := \max_{\mathbf{1}^T x = 1,\; x \ge 0} \sum_{i=1}^n \log\Big(\sum_{j=1}^k x_j K_{ij}\Big) - \frac{\lambda}{\max_i x_i}. \qquad (21)$$
We investigate the above approach in a numerical example in Section 6 and compare with the well-known soft k-means algorithm.
5 Algorithms

5.1 Exponentiated Gradient

Exponentiated gradient [7] is a proximal algorithm for optimizing over the probability simplex which
employs the Kullback-Leibler divergence $D(x, y) = \sum_i x_i \log(x_i/y_i)$ between two probability distributions. For minimizing a convex function φ, the exponentiated gradient updates are given by the
following:
$$x^{k+1} = \arg\min_x\; \varphi(x^k) + \nabla\varphi(x^k)^T (x - x^k) + \frac{1}{\eta}\, D(x, x^k).$$
When applied to the general form of (2), it yields the following updates to solve the i-th problem of
$\tilde p^\star$:
$$x_i^{k+1} = r_i^k x_i^k \Big/ \Big(\sum_j r_j^k x_j^k\Big),$$
where the weights $r_i$ are exponentiated gradients:
$$r_i^k = \exp\big(-\eta\,(\nabla_i f(x^k) - \lambda/x_i^2)\big).$$
We also note that the above updates can be done in parallel for the n convex programs, and they are
guaranteed to converge to the optimum.
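In plain numpy, one update of the i-th subproblem reads as follows; this is a sketch under the stated assumptions, with grad_f (the gradient of f) supplied by the user:

import numpy as np

def eg_step(x, i, grad_f, lam, eta):
    # One exponentiated gradient step for min f(x) + lam/x_i over the
    # simplex; the term lam/x_i contributes -lam/x_i^2 to coordinate i.
    g = np.asarray(grad_f(x), dtype=float).copy()
    g[i] -= lam / x[i] ** 2
    r = np.exp(-eta * g)
    x_new = r * x
    return x_new / x_new.sum()  # multiplicative update, then normalize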
6 Numerical Results

6.1 Recovering a Measure from Gaussian Measurements

Here we show that the proposed recovery scheme is able to recover a sparse measure exactly with
overwhelming probability when the matrix $A \in \mathbb{R}^{m \times n}$ is chosen from the independent Gaussian
ensemble, i.e., $A_{i,j} \sim N(0, 1)$ i.i.d.

As an alternative method, we consider a commonly employed simple heuristic to optimize over
a probability measure, which first drops the constraint $\mathbf{1}^T x = 1$, solves the corresponding $\ell_1$-penalized problem, and finally rescales the optimal x such that $\mathbf{1}^T x = 1$. In the worst case, this
procedure recovers the true solution whenever minimizing the $\ell_1$ norm recovers the solution, i.e., when
there is only one feasible vector satisfying $Ax = b$ and $x \ge 0$, $\mathbf{1}^T x = 1$. This is clearly a
suboptimal approach, and we will refer to it as the rescaling heuristic. We set n = 50, randomly
pick a probability vector $x^\star$ which is k-sparse, let $b = Ax^\star$ be m noiseless measurements, and then
check the probability of recovery, i.e., $\hat x = x^\star$, where $\hat x$ is the solution to
$$\max_{i=1,\dots,n}\; \max_{\mathbf{1}^T x = 1,\; x \ge 0} x_i \quad : \quad Ax = b. \qquad (22)$$
Figure 2(a) shows the probability of exact recovery as a function of m, the number of measurements,
in 100 independent realizations of A for the proposed LP formulation and the rescaling heuristic.
As can be seen in Figure 2(a), the proposed method recovers the correct measure with probability
almost 1 when $m \ge 5$. Quite interestingly, the rescaling heuristic does not succeed in recovering the true
measure with high probability even for a cardinality-2 vector.

We then add normally distributed noise with standard deviation 0.1 to the observations and solve
$$\min_{i=1,\dots,n}\; \min_{\mathbf{1}^T x = 1,\; x \ge 0,\; t \ge 0} \|Ax - b\|_2^2 + t \quad : \quad x_i \ge \lambda/t. \qquad (23)$$
We compare the above approach with the corresponding rescaling heuristic, which first solves a non-negative Lasso,
$$\min_{x \ge 0} \|Ax - b\|_2^2 + \lambda \|x\|_1, \qquad (24)$$
and then rescales x such that $\mathbf{1}^T x = 1$. For each realization of A and measurement noise, we run both
methods using a primal-dual interior point solver for 30 equally spaced values of $\lambda \in [0, 10]$ and
record the minimum error $\|\hat x - x^\star\|_1$. The average error over 100 realizations is shown in Figure
2(b). As can be seen in the figure, the proposed scheme clearly outperforms the rescaling heuristic,
since it can utilize the fact that x is on the probability simplex without trivializing its complexity
regularizer.
(a) Probability of exact recovery as a function of m. (Y-axis: probability of exact recovery in 100 independent trials of A; x-axis: m, the number of measurements (moment constraints), from 1 to 9; curves: rescaling $\ell_1$ heuristic vs. proposed relaxation.)
(b) Average error for noisy recovery as a function of m. (Y-axis: averaged error $\|\hat x - x^\star\|_1$ of estimating the true measure; x-axis and curves as in (a).)
Figure 2: A comparison of the exact recovery probability in the noiseless setting (top) and estimation error in the noisy setting (bottom) of the proposed approach and the rescaled $\ell_1$ heuristic.
6.2 Convex Clustering

We generate synthetic data using a Gaussian mixture of 10 components with identity covariances
and cluster the data using the proposed method; the resulting clusters given by the mixture density are
presented in Figure 3. The centers of the circles represent the means of the mixture components, and
the radii are proportional to the respective mixture weights. We then repeat the clustering procedure
using the well-known soft k-means algorithm and present the results in Figure 4.

As can be seen from the figures, the proposed convex relaxation is able to penalize the cardinality
of the mixture probability vector and produce clusters significantly better than the soft k-means algorithm. Note that soft k-means is a non-convex procedure whose performance depends heavily on
the initialization; the proposed approach is convex, hence insensitive to initialization. Note that
in [8] the number of clusters is adjusted indirectly by varying the β parameter of the distribution.
In contrast, our approach implicitly optimizes the likelihood/cardinality tradeoff by varying
λ. Hence, when the number of clusters is unknown, choosing a value of λ is usually easier than
specifying a value of k for the k-means algorithms.

7 Conclusions and Future Directions

We presented a convex cardinality penalization scheme for problems constrained on the probability
simplex. We then derived a sufficient condition for recovering the sparsest probability measure in
an affine space using the proposed method. The geometric interpretation suggests that it holds for a
large class of matrices. An open theoretical question is to analyze the probability of exact recovery
for a normally distributed A. Another interesting direction is to extend the recovery analysis to the
noisy setting and arbitrary functions such as the log-likelihood in the clustering example. There
might also be other problems where the proposed approach could be practically useful, such as portfolio
optimization, where a sparse convex combination of assets is sought, or sparse multiple kernel learning.
(a) λ = 1000  (b) λ = 300  (c) λ = 100  (d) λ = 45
Figure 3: Proposed convex clustering scheme
(a) k = 3  (b) k = 4  (c) k = 8  (d) k = 10
Figure 4: Soft k-means algorithm
Acknowledgements This work is partially supported by the National Science Foundation under
Grants No. CMMI-0969923, FRG-1160319, and SES-0835531, as well as by a University of California CITRIS seed grant, and a NASA grant No. NAS2-03144. The authors would like to thank the
Area Editor and the reviewers for their careful review of our submission.
References
[1] E.J. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Inform. Theory, 51:4203-4215, 2005.
[2] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129-159, 2001.
[3] A. Bruckstein, D. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 2007.
[4] V. Chandrasekaran, B. Recht, P.A. Parrilo, and A.S. Willsky. The convex algebraic geometry of linear inverse problems. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pages 699-703, 2010.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2003.
[6] A. Cohen and A. Yeredor. On the use of sparsity for recovering discrete probability distributions from their moments. In Statistical Signal Processing Workshop (SSP), 2011 IEEE.
[7] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1-63, 1997.
[8] D. Lashkari and P. Golland. Convex clustering with exemplar-based models. In NIPS, 2008.
3,872 | 4,505 | Privacy Aware Learning
John C. Duchi^1, Michael I. Jordan^{1,2}, Martin J. Wainwright^{1,2}
^1 Department of Electrical Engineering and Computer Science, ^2 Department of Statistics
University of California, Berkeley
Berkeley, CA USA 94720
{jduchi,jordan,wainwrig}@eecs.berkeley.edu
Abstract
We study statistical risk minimization problems under a version of privacy in
which the data is kept confidential even from the learner. In this local privacy
framework, we establish sharp upper and lower bounds on the convergence rates
of statistical estimation procedures. As a consequence, we exhibit a precise tradeoff between the amount of privacy the data preserves and the utility, measured by
convergence rate, of any statistical estimator.
1 Introduction
There are natural tensions between learning and privacy that arise whenever a learner must aggregate
data across multiple individuals. The learner wishes to make optimal use of each data point, but the
providers of the data may wish to limit detailed exposure, either to the learner or to other individuals.
It is of great interest to characterize such tensions in the form of quantitative tradeoffs that can be
both part of the public discourse surrounding the design of systems that learn from data and can be
employed as controllable degrees of freedom whenever such a system is deployed.
We approach this problem from the point of view of statistical decision theory. The decision-theoretic perspective offers a number of advantages. First, the use of loss functions and risk functions
provides a compelling formal foundation for defining "learning," one that dates back to Wald [28] in
the 1930s, and which has seen continued development in the context of research on machine learning over the past two decades. Second, by formulating the goals of a learning system in terms of
loss functions, we make it possible for individuals to assess whether the goals of a learning system
align with their own personal utility, and thereby determine the extent to which they are willing to
sacrifice some privacy. Third, an appeal to decision theory permits abstraction over the details of
specific learning procedures, providing (under certain conditions) minimax lower bounds that apply
to any specific procedure. Finally, the use of loss functions, in particular convex loss functions, in
the design of a learning system allows powerful tools of optimization theory to be brought to bear.
In more formal detail, our framework is as follows. Given a compact convex set $\Theta \subseteq \mathbb{R}^d$, we
wish to find a parameter value $\theta \in \Theta$ achieving good average performance under a loss function
$\ell : \mathcal{X} \times \mathbb{R}^d \to \mathbb{R}_+$. Here the value $\ell(X, \theta)$ measures the performance of the parameter vector $\theta \in \Theta$
on the sample $X \in \mathcal{X}$, and $\ell(x, \cdot) : \mathbb{R}^d \to \mathbb{R}_+$ is convex for each $x \in \mathcal{X}$. We measure the expected
performance of $\theta \in \Theta$ via the risk function
$$R(\theta) := \mathbb{E}[\ell(X, \theta)]. \qquad (1)$$
In the standard formulation of statistical risk minimization, a method M is given n samples
$X_1, \dots, X_n$, and outputs an estimate $\hat\theta_n$ approximately minimizing $R(\theta)$. Instead of allowing M
access to the samples $X_i$, however, we study the effect of giving only a perturbed view $Z_i$ of each
datum $X_i$, quantifying the rate of convergence of $R(\hat\theta_n)$ to $\inf_{\theta \in \Theta} R(\theta)$ as a function of both the
number of samples n and the amount of privacy $Z_i$ provides for $X_i$.
There is a long history of research at the intersection of privacy and statistics, where there is a natural
competition between maintaining the privacy of elements in a dataset {X1 , . . . , Xn } and the output
of statistical procedures. Study of this issue goes back at least to the 1960s, when Warner [29]
suggested privacy-preserving methods for survey sampling. Recently, there has been substantial
work on privacy (focusing on a measure known as differential privacy [12]) in statistics, computer
science, and other fields. We cannot hope to do justice to the large body of related work, referring
the reader to the survey by Dwork [10] and the statistical framework studied by Wasserman and
Zhou [30] for background and references.
In this paper, we study local privacy [13, 17], in which each datum Xi is kept private from the
method M. The goal of many types of privacy is to guarantee that the output $\hat\theta_n$ of the method M
based on the data cannot be used to discover information about the individual samples X1 , . . . , Xn ,
but locally private algorithms access only disguised views of each datum Xi . Local algorithms
are among the most classical approaches to privacy, tracing back to Warner?s work on randomized
response [29], and rely on communication only of some disguised view Zi of each true sample Xi .
Locally private algorithms are natural when the providers of the data, the population sampled to
give $X_1, \dots, X_n$, do not trust even the statistician or statistical method M, but the providers are
interested in the parameters $\theta^\star$ minimizing $R(\theta)$. For example, in medical applications, a participant
may be embarrassed about his use of drugs, but if the loss ℓ is able to measure the likelihood of
developing cancer, the participant has high utility for access to the optimal parameters $\theta^\star$. In essence,
we would like the statistical procedure M to learn from the data $X_1, \dots, X_n$ but not about it.
Our goal is to understand the fundamental tradeoff between maintaining privacy and retaining the utility of the statistical inference method M. Though intuitively there must be some tradeoff,
quantifying it precisely has been difficult. In the machine learning literature, Chaudhuri et al. [7]
develop differentially private empirical risk minimization algorithms, and Dwork and Lei [11] and
Smith [26] analyze similar statistical procedures, but do not show that there must be negative effects
of privacy. Rubinstein et al. [24] are able to show that it is impossible to obtain a useful parameter
vector θ that is substantially differentially private; it is unclear whether their guarantees are improvable. Recent work by Hall et al. [15] gives sharp minimax rates of convergence for differentially
private histogram estimation. Blum et al. [5] also give lower bounds on the closeness of certain
statistical quantities computed from the dataset, though their upper and lower bounds do not match.
Sankar et al. [25] provide rate-distortion theorems for utility models involving information-theoretic
quantities, which has some similarity to our risk-based framework, but it appears challenging to
map their setting onto ours. The work most related to ours is probably that of Kasiviswanathan et al.
[17], who show that locally private algorithms coincide with concepts that can be learned with
polynomial sample complexity in Kearns's statistical query (SQ) model. In contrast, our analysis
addresses sharp rates of convergence, and applies to estimators for a broad class of convex risks (1).
2 Main results and approach
Our approach to local privacy is based on a worst-case measure of mutual information, where we
view privacy preservation as a game between the providers of the data, who wish to preserve
privacy, and nature. Recalling that the method sees only the perturbed version $Z_i$ of $X_i$, we adopt
a uniform variant of the mutual information I(Zi ; Xi ) between the random variables Xi and Zi
as our measure for privacy. This use of mutual information is by no means original [13, 25], but
because standard mutual information has deficiencies as a measure of privacy [e.g. 13], we say the
distribution Q generating Z from X is private only if I(X; Z) is small for all possible distributions
P on X (possibly subject to constraints). This is similar to the worst-case information approach of
Evfimievski et al. [13], which limits privacy breaches. (In the long version of this paper [9] we also
consider differentially private algorithms.)
The central consequences of our main results are, under standard conditions on the loss functions ℓ,
sharp upper and lower bounds on the possible convergence rates for estimation procedures when we
wish to guarantee a level of privacy $I(X_i; Z_i) \le I^\star$. We show there are problem-dependent constants
$a(\Theta, \ell)$ and $b(\Theta, \ell)$ such that the rates of convergence of all possible procedures are lower bounded
by $a(\Theta, \ell)/\sqrt{nI^\star}$, and that there exist procedures achieving convergence rates of $b(\Theta, \ell)/\sqrt{nI^\star}$,
where the ratio $b(\Theta, \ell)/a(\Theta, \ell)$ is upper bounded by a universal constant. Thus, we establish and
quantify explicitly the tradeoff between statistical estimation and the amount of privacy.
We show that stochastic gradient descent is one procedure that achieves the optimal convergence
rates, which means additionally that our upper bounds apply in streaming and online settings, requiring only a fixed-size memory footprint. Our subsequent analysis builds on this favorable property of gradient-based methods, whence we focus on statistical estimation procedures that access
data through the subgradients of the loss functions, $\partial\ell(X, \theta)$. This is a natural restriction. Gradients
of the loss ℓ are asymptotically sufficient [18] (in an asymptotic sense, gradients contain all of the
statistical information for risk minimization problems), stochastic gradient-based estimation procedures are (sample) minimax optimal and Bahadur efficient [23, 1, 27, Chapter 8], many estimation
procedures are gradient-based [20, 6], and distributed optimization procedures that send gradient
information across a network to a centralized procedure M are natural [e.g. 3]. Our mechanism
gives M access to a vector $Z_i$ that is a stochastic (sub)gradient of the loss evaluated on the sample
$X_i$ at a parameter θ of the method's choosing:
$$\mathbb{E}[Z_i \mid X_i, \theta] \in \partial\ell(X_i, \theta), \qquad (2)$$
where $\partial\ell(X_i, \theta)$ denotes the subgradient set of the function $\theta \mapsto \ell(X_i, \theta)$. In a sense, the unbiasedness of the subgradient inclusion (2) is information-theoretically necessary [1].
two-part analysis. One part requires studying saddle points of the mutual information I(X; Z) (as a
function of the distributions P of X and Q(? | X) of Z) under natural constraints that allow inference
of the optimal parameters ?? for the risk R. We show that for certain classes of loss functions " and
constraints on the communicated version Zi of the data Xi , there is a unique distribution Q(? | Xi )
that attains the smallest possible mutual information I(X; Z) for all distributions on X. Using this
unique distribution, we can adapt information-theoretic techniques for obtaining lower bounds on
estimation [31, 1] to derive our lower bounds. The uniqueness results for the conditional distribution
Q show that no algorithm guaranteeing privacy between M and the samples Xi can do better. We
can obtain matching upper bounds by application of known convergence rates for stochastic gradient
and mirror descent algorithms [20, 21], which are computationally efficient.
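To make the communication protocol concrete, the sketch below is our own illustration of one simple unbiased channel (not the minimax-optimal distribution constructed in Section 4): given a gradient g with $\|g\|_\infty \le L$, each coordinate is independently rounded to $\pm M_\infty$, which guarantees $Z \in [-M_\infty, M_\infty]^d$ and $\mathbb{E}[Z \mid g] = g$. The function and variable names are ours:

import numpy as np

def privatize_grad(g, M, rng):
    # Release Z with coordinates in {-M, +M} such that E[Z | g] = g;
    # requires max_j |g_j| <= M. Coordinate j is +M with probability
    # (1 + g_j / M) / 2, independently across coordinates.
    p = 0.5 * (1.0 + g / M)
    signs = 2.0 * (rng.random(g.shape) < p) - 1.0
    return M * signs

def private_sgd(grad_loss, samples, d, M, radius, step=0.1):
    # Locally private SGD: the method only ever sees Z, never the raw
    # gradient grad_loss(x, theta) (a user-supplied subgradient oracle).
    rng = np.random.default_rng(0)
    theta = np.zeros(d)
    for k, x in enumerate(samples):
        z = privatize_grad(grad_loss(x, theta), M, rng)
        theta -= step / np.sqrt(k + 1.0) * z
        theta = np.clip(theta, -radius, radius)  # project onto l_inf ball
    return theta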
3 Optimal learning rates and tradeoffs

Having outlined our general approach, we turn in this section to providing statements of our main
results. Before doing so, we require some formalization of our notions of privacy and error measures,
which we now provide.

3.1 Optimal Local Privacy

We begin by describing in slightly more detail the communication protocol by which information
about the random variables X is communicated to the procedure M. We assume throughout that
there exist two d-dimensional compact sets $C \subseteq \operatorname{int} D \subseteq \mathbb{R}^d$, and we have that
$\partial\ell(x, \theta) \subseteq C$ for all $\theta \in \Theta$ and $x \in \mathcal{X}$. We wish to maximally "disguise" the random variable
X with the random variable Z satisfying $Z \in D$. Such a setting is natural; indeed, many online
optimization and stochastic approximation algorithms [34, 21, 1] assume that for any $x \in \mathcal{X}$ and
$\theta \in \Theta$, if $g \in \partial\ell(x, \theta)$ then $\|g\| \le L$ for some norm $\|\cdot\|$. We may obtain privacy by allowing a
perturbation to the subgradient g so long as the perturbation lives in a (larger) norm ball of radius
$M \ge L$, so that $C = \{g \in \mathbb{R}^d : \|g\| \le L\} \subseteq D = \{g \in \mathbb{R}^d : \|g\| \le M\}$.
Now let X have distribution P, and for each $x \in \mathcal{X}$, let $Q(\cdot \mid x)$ denote the regular conditional
probability measure of Z given that X = x. Let $Q(\cdot)$ denote the marginal probability defined by
$Q(A) = \mathbb{E}_P[Q(A \mid X)]$. The mutual information between X and Z is the expected Kullback-Leibler (KL) divergence between $Q(\cdot \mid X)$ and $Q(\cdot)$:
$$I(P, Q) = I(X; Z) := \mathbb{E}_P\big[D_{\mathrm{kl}}\big(Q(\cdot \mid X)\,\|\,Q(\cdot)\big)\big]. \qquad (3)$$
We view the problem of privacy as a game between the adversary controlling P and the data owners,
who use Q to obscure the samples X. In particular, we say a distribution Q guarantees a level of
privacy $I^\star$ if and only if $\sup_P I(P, Q) \le I^\star$. (Evfimievski et al. [13, Definition 6] present a similar
condition.) Thus we seek a saddle point $P^\star, Q^\star$ such that
$$\sup_P I(P, Q^\star) \;\le\; I(P^\star, Q^\star) \;\le\; \inf_Q I(P^\star, Q), \qquad (4)$$
where the first supremum is taken over all distributions P on X such that $\partial\ell(X, \theta) \in C$ with
P-probability 1, and the infimum is taken over all regular conditional distributions Q such that if
$Z \sim Q(\cdot \mid X)$, then $Z \in D$ and $\mathbb{E}_Q[Z \mid X, \theta] = \partial\ell(X, \theta)$. Indeed, if we can find $P^\star$ and $Q^\star$
satisfying the saddle point (4), then the trivial direction of the max-min inequality yields
$$\sup_P \inf_Q I(P, Q) = I(P^\star, Q^\star) = \inf_Q \sup_P I(P, Q).$$
To fully formalize this idea and our notions of privacy, we define two collections of probability
measures and associated losses. For sets $C \subseteq D \subseteq \mathbb{R}^d$, we define the source set
$$\mathcal{P}(C) := \{\text{distributions } P \text{ such that } \operatorname{supp} P \subseteq C\} \qquad (5a)$$
and the set of regular conditional distributions (r.c.d.'s), or communicating distributions,
$$\mathcal{Q}(C, D) := \Big\{\text{r.c.d.'s } Q \text{ s.t. } \operatorname{supp} Q(\cdot \mid c) \subseteq D \text{ and } \int_D z\, dQ(z \mid c) = c \text{ for } c \in C\Big\}. \qquad (5b)$$
The definitions (5a) and (5b) formally define the sets over which we may take infima and suprema
in the saddle point calculations, and they capture what may be communicated. The conditional
distributions $Q \in \mathcal{Q}(C, D)$ are defined so that if $\partial\ell(x, \theta) \in C$ then
$\mathbb{E}_Q[Z \mid X, \theta] := \int_D z\, dQ(z \mid \partial\ell(x, \theta)) = \partial\ell(x, \theta)$. We now make the following key definition:
Definition 1. The conditional distribution $Q^\star$ satisfies optimal local privacy for the sets
$C \subseteq D \subseteq \mathbb{R}^d$ at level $I^\star$ if
$$\sup_P I(P, Q^\star) = \inf_Q \sup_P I(P, Q) = I^\star,$$
where the supremum is taken over distributions $P \in \mathcal{P}(C)$ and the infimum is taken over regular
conditional distributions $Q \in \mathcal{Q}(C, D)$.
If a distribution $Q^\star$ satisfies optimal local privacy, then it guarantees that even for the worst possible
distribution on X, the information communicated about X is limited. In a sense, Definition 1
captures the natural competition between privacy and learnability. The method M specifies the
set D to which the data Z it receives must belong; the "teachers," or owners of the data X, choose
the distribution Q to guarantee as much privacy as possible subject to this constraint. Using this
mechanism, if we can characterize a unique distribution $Q^\star$ attaining the infimum (4) for $P^\star$ (and
by extension, for any P), then we may study the effects of privacy between the method M and X.
3.2 Minimax error and loss functions

Having defined our privacy metric, we now turn to our original goal: quantification of the effect
privacy has on statistical estimation rates. Let M denote any statistical procedure or method (that
uses n stochastic gradient samples) and let $\hat\theta_n$ denote the output of M after receiving n such samples.
Let P denote the distribution according to which samples X are drawn. We define the (random) error
of the method M on the risk $R(\theta) = \mathbb{E}[\ell(X, \theta)]$ after receiving n sample gradients as
$$\epsilon_n(\mathcal{M}, \ell, \Theta, P) := R(\hat\theta_n) - \inf_{\theta \in \Theta} R(\theta) = \mathbb{E}_P[\ell(X, \hat\theta_n)] - \inf_{\theta \in \Theta} \mathbb{E}_P[\ell(X, \theta)]. \qquad (6)$$
In our setting, in addition to the randomness in the sampling distribution P, there is additional
randomness from the perturbation applied to stochastic gradients of the objective $\ell(X, \theta)$ to mask X
from the statistician. Let Q denote the regular conditional probability (the channel distribution)
whose conditional part is defined on the range of the subgradient mapping $\partial\ell(X, \theta)$. As the output
$\hat\theta_n$ of the statistical procedure M is a random function of both P and Q, we measure the expected
sub-optimality of the risk according to both P and Q. Now, let $\mathcal{L}$ be a collection of loss functions,
where $\mathcal{L}(P)$ denotes the losses $\ell : \operatorname{supp} P \times \Theta \to \mathbb{R}$ belonging to $\mathcal{L}$. We define the minimax error
$$\epsilon_n^\star(\mathcal{L}, \Theta) := \inf_{\mathcal{M}} \sup_{\ell \in \mathcal{L}(P),\, P} \mathbb{E}_{P,Q}[\epsilon_n(\mathcal{M}, \ell, \Theta, P)], \qquad (7)$$
where the expectation is taken over the random samples $X \sim P$ and $Z \sim Q(\cdot \mid X)$. We characterize
the minimax error (7) for several classes of loss functions $\mathcal{L}(P)$, giving sharp results when the
privacy distribution Q satisfies optimal local privacy.
We assume that our collection of loss functions obeys certain natural smoothness conditions, which
are often (as we see presently) satisfied. We define the class of losses as follows.
Definition 2. Let L > 0 and $p \ge 1$. The set of (L, p)-loss functions are those measurable functions
$\ell : \mathcal{X} \times \Theta \to \mathbb{R}$ such that for each $x \in \mathcal{X}$, the function $\theta \mapsto \ell(x, \theta)$ is convex and
$$|\ell(x, \theta) - \ell(x, \theta')| \le L\,\|\theta - \theta'\|_q \qquad (8)$$
for any $\theta, \theta' \in \Theta$, where q is the conjugate of p: 1/p + 1/q = 1.

A loss ℓ satisfies the condition (8) if and only if for all $\theta \in \Theta$ we have the inequality $\|g\|_p \le L$ for
any subgradient $g \in \partial\ell(x, \theta)$ (e.g. [16]). We give a few standard examples of such loss functions.
First, we consider finding a multi-dimensional median, in which case the data $x \in \mathbb{R}^d$ and $\ell(x, \theta) = L\,\|\theta - x\|_1$. This loss is L-Lipschitz with respect to the $\ell_1$ norm, so it belongs to the class of $(L, \infty)$
losses. A second example includes classification problems, using either the hinge loss or the logistic
regression loss. In these cases, the data comes in pairs x = (a, b), where $a \in \mathbb{R}^d$ is the set of
regressors and $b \in \{-1, 1\}$ is the label; the losses are
$$\ell(x, \theta) = [1 - b\langle a, \theta\rangle]_+ \quad \text{or} \quad \ell(x, \theta) = \log\big(1 + \exp(-b\langle a, \theta\rangle)\big).$$
By computing (sub)gradients, we may verify that each of these belongs to the class of (L, p)-losses
if and only if the data a satisfies $\|a\|_p \le L$, which is a common assumption [7, 24].
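As a quick sanity check of the condition (8) (our own illustration, not from the paper), the subgradients of the two classification losses above are one-liners, and their $\ell_p$ norms are bounded by $\|a\|_p$ since the scalar factors have magnitude at most one:

import numpy as np

def hinge_subgrad(a, b, theta):
    # A subgradient of [1 - b<a, theta>]_+ (zero where the hinge is inactive).
    return -b * a if 1.0 - b * np.dot(a, theta) > 0 else np.zeros_like(a)

def logistic_grad(a, b, theta):
    # Gradient of log(1 + exp(-b<a, theta>)).
    s = 1.0 / (1.0 + np.exp(b * np.dot(a, theta)))  # sigmoid(-b<a, theta>)
    return -b * s * a

Thus $\|a\|_p \le L$ places both losses in the (L, p) class, matching the discussion above.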
The privacy-guaranteeing channel distributions $Q^\star$ we construct in Section 4 are motivated by our
concern with the (L, p) families of loss functions. In our model of computation, the learning method
M queries the loss $\ell(X_i, \theta)$ at the point θ; the owner of the datum $X_i$ then computes the subgradient
$\partial\ell(X_i, \theta)$ and returns a masked version $Z_i$ with the property that $\mathbb{E}[Z_i \mid X_i, \theta] \in \partial\ell(X_i, \theta)$. In
the following two theorems, we give lower bounds on $\epsilon_n^\star$ for the $(L, \infty)$ and (L, 1) families of loss
functions under the constraint that the channel distribution Q must guarantee that a limited amount
of information $I(X_i; Z_i)$ is communicated: the channel distribution Q satisfies our Definition 1 of
optimal local privacy.
3.3 Main theorems

We now state our two main theorems, deferring proofs to Appendix B. Our first theorem applies to
the class of $(L, \infty)$ loss functions (recall Definition 2). We assume that the set to which the perturbed
data Z must belong is $[-M_\infty, M_\infty]^d$, where $M_\infty \ge L$. We state two variants of the theorem, as one
gives sharper results for an important special case.

Theorem 1. Let $\mathcal{L}$ be the collection of $(L, \infty)$ loss functions, and assume the conditions of the
preceding paragraph. Let Q be optimally private for the collection $\mathcal{L}$. Then

(a) If Θ contains the $\ell_\infty$ ball of radius r,
$$\epsilon_n^\star(\mathcal{L}, \Theta) \;\ge\; \frac{1}{163}\,\frac{M_\infty\, r\, d}{\sqrt{n}}.$$

(b) If $\Theta = \{\theta \in \mathbb{R}^d : \|\theta\|_1 \le r\}$,
$$\epsilon_n^\star(\mathcal{L}, \Theta) \;\ge\; \frac{r M_\infty \sqrt{\log(2d)}}{17\sqrt{n}}.$$
For our second theorem, we assume that the loss functions $\mathcal{L}$ consist of (L, 1) losses, and that the
perturbed data must belong to the $\ell_1$ ball of radius $M_1$, i.e., $Z \in \{z \in \mathbb{R}^d : \|z\|_1 \le M_1\}$. Setting
$M = M_1/L$, we define (these constants relate to the optimal local privacy distribution for $\ell_1$ balls)
$$\alpha := \log\Bigg(\frac{2d - 2 + \sqrt{(2d-2)^2 + 4(M^2 - 1)}}{2(M - 1)}\Bigg), \quad \text{and} \quad \chi(\alpha) := \frac{e^\alpha - e^{-\alpha}}{e^\alpha + e^{-\alpha} + 2(d - 1)}. \qquad (9)$$
Theorem 2. Let $\mathcal{L}$ be the collection of (L, 1) loss functions and assume the conditions of the preceding paragraph. Let Q be optimally locally private for the collection $\mathcal{L}$. Then
$$\epsilon_n^*(\mathcal{L}, \Theta) \ge \frac{1}{163} \cdot \frac{r L \sqrt{d}}{\sqrt{n\,\Phi(\alpha)}}.$$
Remarks. We make two main remarks about Theorems 1 and 2. First, we note that each result yields a minimax rate for stochastic optimization problems when there is no random distribution Q. Indeed, in Theorem 1, we may take M∞ = L, in which case (focusing on the second statement of the theorem) we obtain the lower bound $rL\sqrt{\log(2d)}/17\sqrt{n}$ when Θ = {θ ∈ ℝ^d : ‖θ‖₁ ≤ r}. Mirror descent algorithms [20, 21] attain a matching upper bound (see the long version of this paper [9, Sec. 3.3] for a more substantial explanation). Moreover, our analysis is sharper than previous analyses [1, 20], as none (to our knowledge) recover the logarithmic dependence on the dimension d, which is evidently necessary. Theorem 2 provides a similar result when we take M₁ ≥ L, though in this case stochastic gradient descent attains the matching upper bound.
Our second set of remarks is somewhat more striking. In these, we show that the lower bounds in Theorems 1 and 2 give sharp tradeoffs between the statistical rate of convergence for any statistical procedure and the desired privacy of a user. We present two corollaries establishing this tradeoff. In each corollary, we look ahead to Section 4 and use one of Propositions 1 or 2 to derive a bijection between the size M∞ or M₁ of the perturbation set and the amount of privacy provided, as measured by the worst case mutual information I*. We then combine Theorems 1 and 2 with results on stochastic approximation to demonstrate the tradeoffs.
Corollary 1. Let the conditions of Theorem 1(b) hold, and assume that M∞ ≥ 2L. Assume Q* satisfies optimal local privacy at information level I*. For universal constants c ≤ C,
$$c \cdot \frac{rL\sqrt{d \log d}}{\sqrt{nI^*}} \;\le\; \epsilon_n^*(\mathcal{L}, \Theta) \;\le\; C \cdot \frac{rL\sqrt{d \log d}}{\sqrt{nI^*}}.$$
Proof. Since Θ ⊆ {θ ∈ ℝ^d : ‖θ‖₁ ≤ r}, mirror descent [2, 21, 20, Chapter 5], using n unbiased stochastic gradient samples whose ℓ∞ norms are bounded by M∞, obtains convergence rate $O(M_\infty r \sqrt{\log d}/\sqrt{n})$. This matches the second statement of Theorem 1. Now fix our desired amount of mutual information I*. From the remarks following Proposition 1, if we must guarantee that I* ≥ sup_P I(P, Q) for any distribution P and loss function ℓ whose gradients are bounded in ℓ∞-norm by L, we must have
$$I^* \gtrsim \frac{dL^2}{M_\infty^2}.$$
Up to higher-order terms, to guarantee a level of privacy with mutual information I*, we must allow gradient noise up to $M_\infty = L\sqrt{d/I^*}$. Using the bijection between M∞ and the maximal allowed mutual information I* under local privacy that we have shown, we substitute $M_\infty = L\sqrt{d/I^*}$ into the upper and lower bounds that we have already attained.
Similar upper and lower bounds can be obtained under the conditions of part (a) of Theorem 1, where we need not assume Θ is an ℓ₁-ball, but we lose a factor of $\sqrt{\log d}$ in the lower bound. Now we turn to a parallel result, applying Theorem 2 and Proposition 2.
Corollary 2. Let the conditions of Theorem 2 hold and assume that M₁ ≥ 2L. Assume that Q* satisfies optimal local privacy at information level I*. For universal constants c ≤ C,
$$c \cdot \frac{rLd}{\sqrt{nI^*}} \;\le\; \epsilon_n^*(\mathcal{L}, \Theta) \;\le\; C \cdot \frac{rLd}{\sqrt{nI^*}}.$$
Proof. By the conditions of optimal local privacy (Proposition 2 and Corollary 3), to have I* ≥ sup_P I(P, Q) for any loss ℓ whose gradients are bounded in ℓ₁-norm by L, we must have
$$I^* \gtrsim \frac{dL^2}{2M_1^2},$$
using Corollary 3. Rewriting this, we see that we must have $M_1 = L\sqrt{d/2I^*}$ (to higher-order terms) to be able to guarantee an amount of privacy I*. As in the ℓ∞ case, we have a bijection between the multiplier M₁ and the amount of information I*, and we can apply similar techniques. Indeed, stochastic gradient descent (SGD) enjoys the following convergence guarantees (e.g., [21]). Let Θ ⊆ ℝ^d be contained in the ℓ∞ ball of radius r and let the gradients of the loss ℓ belong to the ℓ₁-ball of radius M₁. Then SGD has $\epsilon_n^*(\mathcal{L}, \Theta) \le C M_1 r \sqrt{d}/\sqrt{n}$. Now apply the lower bound provided by Theorem 2 and substitute for M₁.
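To make the interaction model behind these corollaries concrete, here is a schematic sketch of projected SGD run against a privacy channel. This is our own illustration with hypothetical names, not code from the paper; `channel` stands for any unbiased masking distribution Q, such as those constructed in Propositions 1 and 2 below.

```python
import numpy as np

def private_sgd(data, subgrad, channel, radius, steps, stepsize, rng):
    """Projected SGD where the learner only ever sees masked gradients.

    subgrad(x, theta) -> a subgradient of the loss at theta for datum x
    channel(g, rng)   -> noisy Z with E[Z | g] = g (the privacy channel Q)
    Iterates are projected onto the l_inf ball of the given radius.
    """
    theta = np.zeros(data.shape[1])
    running_sum = np.zeros_like(theta)
    for _ in range(steps):
        x = data[rng.integers(len(data))]
        z = channel(subgrad(x, theta), rng)   # the owner releases only Z
        theta = np.clip(theta - stepsize * z, -radius, radius)
        running_sum += theta
    return running_sum / steps                # averaged iterate
```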
4 Saddle points, optimal privacy, and mutual information
In this section, we explore conditions for a distribution Q* to satisfy optimal local privacy, as given by Definition 1. We give characterizations of necessary and sufficient conditions based on the compact sets C ⊆ D for distributions P* and Q* to achieve the saddle point (4). Our results can be viewed as rate distortion theorems [14, 8] (with source P and channel Q) for certain compact alphabets, though as far as we know, they are all new. Thus we sometimes refer to the conditional distribution Q, which is designed to maintain the privacy of the data X by communication of Z, as the channel distribution. Since we wish to bound I(X; Z) for arbitrary losses ℓ, we must address the case when ℓ(X, θ) = ⟨θ, X⟩, in which case ∇ℓ(X, θ) = X; by the data-processing inequality [14, Chapter 5] it is thus no loss of generality to assume that X ∈ C and that E[Z | X] = X.
We begin by defining the types of sets C and D that we use in our characterization of privacy. As
we see in Section 3, such sets are reasonable for many applications. We focus on the case when the
compact sets C and D are (suitably symmetric) norm balls:
Definition 3. Let C ⊂ ℝ^d be a compact convex set with extreme points u_i ∈ ℝ^d, i ∈ I, for some index set I. Then C is rotationally invariant through its extreme points if ‖u_i‖₂ = ‖u_j‖₂ for each i, j, and for any unitary matrix U such that U u_i = u_j for some i ≠ j, then UC = C.
Some examples of convex sets rotationally invariant through their extreme points include ℓ_p-norm balls for p = 1, 2, ∞, though ℓ_p-balls for p ∉ {1, 2, ∞} are not. The following theorem gives a general characterization of the minimax mutual information for rotationally invariant norm balls with finite numbers of extreme points by providing saddle point distributions P* and Q*. We provide the proof of Theorem 3 in Section A.1.
Theorem 3. Let C be a compact, convex polytope rotationally invariant through its extreme points $\{u_i\}_{i=1}^m$ and D = (1 + ρ)C for some ρ > 0. Let Q* be the conditional distribution on Z | X that maximizes the entropy H(Z | X = x) subject to the constraints that
$$E_Q[Z \mid X = x] = x$$
for x ∈ C and that Z is supported on (1 + ρ)u_i for i = 1, ..., m. Then Q* satisfies Definition 1, optimal local privacy, and Q* is (up to measure zero sets) unique. Moreover, the distribution P* uniform on $\{u_i\}_{i=1}^m$ uniquely attains the saddle point (4).
Remarks: While in the theorem we assume that Q*(· | X = x) maximizes the entropy for each x ∈ C, this is not in fact essential. Indeed, we may introduce a random variable X′ between X and Z: let X′ be distributed among the extreme points $\{u_i\}_{i=1}^m$ of C in any way such that E[X′ | X] = X, then use the maximum entropy distribution Q*(· | u_i) defined in the theorem when X′ ∈ $\{u_i\}_{i=1}^m$ to sample Z from X′. The information processing inequality [14, Chapter 5] guarantees that the Markov chain X → X′ → Z satisfies the minimax bound I(X; Z) ≤ inf_Q sup_P I(P, Q).
With Theorem 3 in place, we can explicitly characterize the distributions achieving optimal local privacy (recall Definition 1) for ℓ₁ and ℓ∞ balls. We present the propositions in turn, providing some discussion here and deferring proofs to Appendices A.2 and A.3.
First, consider the case where X ∈ [−1, 1]^d and Z ∈ [−M, M]^d. For notational convenience, we define the binary entropy h(p) = −p log p − (1 − p) log(1 − p). We have
Proposition 1. Let X ∈ [−1, 1]^d and Z ∈ [−M, M]^d be random variables with M ≥ 1 and E[Z | X] = X almost surely. Define Q* to be the conditional distribution on Z | X such that the coordinates of Z are independent, have range {−M, M}, and
$$Q^*(Z_i = M \mid X) = \frac{1}{2} + \frac{X_i}{2M} \qquad \text{and} \qquad Q^*(Z_i = -M \mid X) = \frac{1}{2} - \frac{X_i}{2M}.$$
Then Q* satisfies Definition 1, optimal local privacy, and moreover,
$$\sup_P I(P, Q^*) = d - d \cdot h\left(\frac{1}{2} + \frac{1}{2M}\right).$$
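The channel of Proposition 1 is straightforward to implement. The sketch below is our own code with hypothetical names; it masks each coordinate independently and empirically confirms the unbiasedness property E[Z | X] = X, and it can be passed as the `channel` argument of the SGD sketch in Section 3.3.

```python
import numpy as np

def channel_linf(g, rng, M=2.0):
    # Proposition 1: Z_i = +M w.p. 1/2 + g_i/(2M), else -M; needs |g_i| <= 1 <= M.
    p_plus = 0.5 + g / (2.0 * M)
    return M * np.where(rng.random(g.shape) < p_plus, 1.0, -1.0)

rng = np.random.default_rng(0)
g = np.array([0.3, -0.8, 0.0])
z = np.stack([channel_linf(g, rng) for _ in range(200_000)])
print(z.mean(axis=0))   # approximately g: the channel is unbiased
```

By the proposition, each such release communicates at most sup_P I(P, Q*) about its input, an amount that shrinks as the perturbation range M grows.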
Before continuing, we give a more intuitive understanding of Proposition 1. Concavity implies that for a, b > 0, log a ≤ log b + b⁻¹(a − b), or −log a ≥ −log b + b⁻¹(b − a), so in particular
$$h\left(\frac{1}{2} + \frac{1}{2M}\right) \ge \left(\frac{1}{2} + \frac{1}{2M}\right)\left(\log 2 - \frac{1}{M}\right) + \left(\frac{1}{2} - \frac{1}{2M}\right)\left(\log 2 + \frac{1}{M}\right) = \log 2 - \frac{1}{M^2}.$$
That is, we have for any distribution P on X ∈ [−1, 1]^d that (in natural logarithms)
$$I(P, Q^*) \le \frac{d}{M^2} \qquad \text{and} \qquad I(P, Q^*) = \frac{d}{2M^2} + O(M^{-3}).$$
We now consider the case when X ∈ {x ∈ ℝ^d : ‖x‖₁ ≤ 1} and Z ∈ {z ∈ ℝ^d : ‖z‖₁ ≤ M}. Here the arguments are slightly more complicated, as the coordinates of the random variables are no longer independent, but Theorem 3 still allows us to explicitly characterize the saddle point of the mutual information.
Proposition 2. Let X ∈ {x ∈ ℝ^d : ‖x‖₁ ≤ 1} and Z ∈ {z ∈ ℝ^d : ‖z‖₁ ≤ M} be random variables with M > 1. Define the parameter α as in Eq. (9), and let Q* be the distribution on Z | X such that Z is supported on $\{\pm M e_i\}_{i=1}^d$, and
$$Q^*(Z = M e_i \mid X = e_i) = \frac{e^\alpha}{e^\alpha + e^{-\alpha} + (2d - 2)}, \tag{10a}$$
$$Q^*(Z = -M e_i \mid X = e_i) = \frac{e^{-\alpha}}{e^\alpha + e^{-\alpha} + (2d - 2)}, \tag{10b}$$
$$Q^*(Z = \pm M e_j \mid X = e_i,\ j \ne i) = \frac{1}{e^\alpha + e^{-\alpha} + (2d - 2)}. \tag{10c}$$
(For X ∉ {±e_i}, define X′ to be randomly selected in any way from among {±e_i} such that E[X′ | X] = X, then sample Z conditioned on X′ according to (10a)-(10c).) Then Q* satisfies Definition 1, optimal local privacy, and
$$\sup_P I(P, Q^*) = \log(2d) - \log\left(e^\alpha + e^{-\alpha} + 2d - 2\right) + \alpha\, \frac{e^\alpha - e^{-\alpha}}{e^\alpha + e^{-\alpha} + 2d - 2}.$$
We remark that the additional sampling to guarantee that X′ ∈ {±e_i} (where the conditional distribution Q* is defined) can be accomplished simply: define the random variable X′ so that X′ = e_i sign(x_i) with probability |x_i|/‖x‖₁. Evidently E[X′ | X] = x, and X → X′ → Z for Z distributed according to Q* defines a Markov chain as in our remarks following Theorem 3.
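Here is a sketch of sampling from this channel, including the resampling of X′ just described. It is our own code, and it assumes ‖x‖₁ = 1, in which case E[Z | X] = X holds exactly by the choice of α in Eq. (9).

```python
import numpy as np

def alpha_for(M, d):
    # Eq. (9): the exponential tilt that makes the channel unbiased.
    return np.log((2*d - 2 + np.sqrt((2*d - 2)**2 + 4*(M**2 - 1))) / (2*(M - 1)))

def channel_l1(x, M, rng):
    d = len(x)
    alpha = alpha_for(M, d)
    # Step 1: X' = sign(x_i) e_i with probability |x_i| / ||x||_1.
    i = rng.choice(d, p=np.abs(x) / np.abs(x).sum())
    s = 1.0 if x[i] >= 0 else -1.0
    # Step 2: sample Z on {+/- M e_j} via (10a)-(10c), relative to X' = s e_i.
    w = np.ones(2 * d)                              # the 2d - 2 "other" points
    w[i], w[d + i] = np.exp(alpha), np.exp(-alpha)  # aligned / anti-aligned with X'
    k = rng.choice(2 * d, p=w / w.sum())
    j, sgn = (k, 1.0) if k < d else (k - d, -1.0)
    z = np.zeros(d)
    z[j] = s * sgn * M if j == i else sgn * M
    return z
```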
Additionally, an asymptotic expansion allows us to gain a somewhat clearer picture of the values of
the mutual information, though we do not derive upper bounds as we did for Proposition 1. We have
the following corollary, proved in Appendix E.1.
Corollary 3. Let Q* denote the conditional distribution in Proposition 2. Then
$$\sup_P I(P, Q^*) = \frac{d}{2M^2} + O\left(\min\left\{\frac{d^3}{M^4}, \frac{\log^4(d)}{d}\right\}\right).$$
5 Discussion and open questions
This study leaves a number of open issues and areas for future work. We study procedures that access each datum only once and through a perturbed view Z_i of the subgradient ∂ℓ(X_i, θ), which allows us to use (essentially) any convex loss. A natural question is whether there are restrictions on the loss function so that a transformed version (Z₁, ..., Z_n) of the data is sufficient for inference. Zhou et al. [33] study one such procedure, and nonparametric data releases, such as those Hall et al. [15] study, may also provide insights. Unfortunately, these (and other) current approaches require the data to be aggregated by a trusted curator. Our constraints on the privacy-inducing channel distribution Q require that its support lie in some compact set. We find this restriction useful, but perhaps it is possible to achieve faster estimation rates under other conditions. A better understanding of general privacy-preserving channels Q for alternative constraints to those we have proposed is also desirable. These questions do not appear to have easy answers, especially when we wish to allow each provider of a single datum to be able to guarantee his or her own privacy. Nevertheless, we hope that our view of privacy and the techniques we have developed herein prove fruitful, and we hope to investigate some of the above issues in future work.
Acknowledgments. We thank Cynthia Dwork, Guy Rothblum, and Kunal Talwar for feedback on early versions of this work. This material is supported in part by ONR MURI grant N00014-11-1-0688 and the U.S. Army Research Laboratory and the U.S. Army Research Office under grant W911NF-11-1-0391. JCD was partially supported by an NDSEG fellowship and a Facebook fellowship.
References
[1] A. Agarwal, P. Bartlett, P. Ravikumar, and M. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization. IEEE Transactions on Information Theory, 58(5):3235-3249, 2012.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31:167-175, 2003.
[3] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, Inc., 1989.
[4] P. Billingsley. Probability and Measure. Wiley, second edition, 1986.
[5] A. Blum, K. Ligett, and A. Roth. A learning theory approach to non-interactive database privacy. In Proceedings of the Fortieth Annual ACM Symposium on the Theory of Computing, 2008.
[6] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[7] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069-1109, 2011.
[8] T. M. Cover and J. A. Thomas. Elements of Information Theory, second edition. Wiley, 2006.
[9] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Privacy aware learning. URL http://arxiv.org/abs/1210.2085, 2012.
[10] C. Dwork. Differential privacy: a survey of results. In Theory and Applications of Models of Computation, volume 4978 of Lecture Notes in Computer Science, pp. 1-19. Springer, 2008.
[11] C. Dwork and J. Lei. Differential privacy and robust statistics. In Proceedings of the Forty-First Annual ACM Symposium on the Theory of Computing, 2009.
[12] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Theory of Cryptography Conference, pp. 265-284, 2006.
[13] A. V. Evfimievski, J. Gehrke, and R. Srikant. Limiting privacy breaches in privacy preserving data mining. In Proceedings of the Twenty-Second Symposium on Principles of Database Systems, pp. 211-222, 2003.
[14] R. M. Gray. Entropy and Information Theory. Springer, 1990.
[15] R. Hall, A. Rinaldo, and L. Wasserman. Random differential privacy. URL http://arxiv.org/abs/1112.2680, 2011.
[16] J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms. Springer, 1996.
[17] S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, and A. Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793-826, 2011.
[18] L. Le Cam. On the asymptotic theory of estimation and hypothesis testing. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, pp. 129-156, 1956.
[19] L. Le Cam. Convergence of estimates under dimensionality restrictions. Annals of Statistics, 1(1):38-53, 1973.
[20] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, 1983.
[21] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[22] R. R. Phelps. Lectures on Choquet's Theorem, second edition. Springer, 2001.
[23] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.
[24] B. I. P. Rubinstein, P. L. Bartlett, L. Huang, and N. Taft. Learning in a large function space: privacy-preserving mechanisms for SVM learning. Journal of Privacy and Confidentiality, 4(1):65-100, 2012.
[25] L. Sankar, S. R. Rajagopalan, and H. V. Poor. An information-theoretic approach to privacy. In The 48th Allerton Conference on Communication, Control, and Computing, pp. 1220-1227, 2010.
[26] A. Smith. Privacy-preserving statistical estimation with optimal convergence rates. In Proceedings of the Forty-Third Annual ACM Symposium on the Theory of Computing, 2011.
[27] A. W. van der Vaart. Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998. ISBN 0-521-49603-9.
[28] A. Wald. Contributions to the theory of statistical estimation and testing hypotheses. Annals of Mathematical Statistics, 10(4):299-326, 1939.
[29] S. L. Warner. Randomized response: a survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309):63-69, 1965.
[30] L. Wasserman and S. Zhou. A statistical framework for differential privacy. Journal of the American Statistical Association, 105(489):375-389, 2010.
[31] Y. Yang and A. Barron. Information-theoretic determination of minimax rates of convergence. Annals of Statistics, 27(5):1564-1599, 1999.
[32] B. Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pp. 423-435. Springer-Verlag, 1997.
[33] S. Zhou, J. Lafferty, and L. Wasserman. Compressed regression. IEEE Transactions on Information Theory, 55(2):846-866, 2009.
[34] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
Multiplicative Forests for Continuous-Time Processes
Jeremy C. Weiss
University of Wisconsin
Madison, WI 53706, USA
Sriraam Natarajan
Wake Forest University
Winston Salem, NC 27157, USA
David Page
University of Wisconsin
Madison, WI 53706, USA
Abstract
Learning temporal dependencies between variables over continuous time is an
important and challenging task. Continuous-time Bayesian networks effectively
model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop
a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative
assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned
from few temporal trajectories with large gains in performance and scalability.
1 Introduction
The modeling of temporal dependencies is an important and challenging task with applications in
fields that use forecasting or retrospective analysis, such as finance, biomedicine, and anomaly detection. While analyses over time series data with fixed, discrete time intervals are well studied, as
for example in [1], there are domains in which discretizing the time leads to intervals where no observations are made, producing ?missing data? in those periods, or there is no natural discretization
available and so the time series assumptions are restrictive. Of note, experiments in previous work
provide evidence that coercing continuous-time data into time series and conducting time series
analysis is less effective than learning models built with continuous-time data in mind [2].
We investigate a subset of continuous-time models: probabilistic models over finite event spaces
across continuous time. The prevailing model in this field is the continuous-time Markov process
(CTMP), a model that provides an initial distribution over states and a rate matrix parameterizing the
rate of transitioning between states. However, this model does not scale for the case where a CTMP
state is a joint state over many variable states. Because the number of joint states is exponential
in the number of variables, the size of the CTMP rate matrix grows exponentially in the number
of variables. Continuous-time Bayesian networks (CTBNs), a family of CTMPs with a factored
representation, encode rate matrices for each variable and the dependencies among variables [3].
Figure 1 shows a complete trajectory, i.e., a timeline where the state of each variable is known for
all times t, for a CTMP with four joint states (a, b), (a, B), (A, b), and (A, B) factorized into two
binary CTBN variables ? and ? (with states a and A, and b and B, respectively).
Previous work on CTBNs includes several approaches to performing CTBN inference [4, 5, 6, 7, 8]
and learning [2, 3]. Briefly, CTBNs do not admit exact inference without transformation to the
exponential-size CTMP. Approximate inference methods including expectation propagation [4],
mean field [6], importance sampling-based methods [7], and MCMC [8] have been applied, and
while these methods have helped mitigate the inference problem, inference in large networks remains a challenge. CTBN learning involves parameter learning using sufficient statistics (e.g. numbers of transitions M and durations T in Figure 1) and structure learning over a directed (possibly
cyclic) graph over the variables to maximize a penalized likelihood score. Our work addresses learning in a generalized framework to which the inference methods mentioned above can be extended.
In this work we introduce a generalization of CTBNs: partition-based CTBNs. Partition-based
CTBNs remove the restriction used in CTBNs of storing one rate matrix per parents setting for
every variable. Instead partition-based CTBNs define partitions over the joint state space and define
the transition rate of each variable to be dependent on the membership of the current joint state
to an element (part) of a partition. As an example, suppose we have partition P composed of
parts p1 = {(a, b), (A, b)} and p2 = {(a, B), (A, B)}. Then the transition into si from joint state
(A, B) in Figure 1 would be parameterized by transition rate qa|p2 . Partition-based CTBNs store
one transition rate per part, as opposed to one transition rate matrix per parents setting. Later we
will show that, for a particular choice of partitions, a partition-based CTBN is equivalent to a CTBN.
However, the more general framework offers other choices of partitions which may be more suitable
for learning from data.
Partition-based CTBNs avoid one limitation of
CTBNs: that the model size is necessarily exponential in the maximum number of parents per
variable. For networks with sparse incoming connections, this issue is not apparent. However,
in many real domains, a variable's transition rate
may be a function of many variables.
Given the framework of partition-based CTBNs,
we need to provide a way to determine useful
partitions. Thus, we introduce partition-based
CTBN learning using regression tree modifications in place of CTBN learning using graph operators of adding, reversing, and deleting edges.
In the spirit of context-specific independence [9],
we can view tree learning as a method for learning compact partition-based dependencies. However, tree learning induces recursive subpartitions, which limits their ability to partition the
joint state space. We therefore introduce multiplicative forests for CTBNs, which allow the
model to represent up to an exponential number
of transition rates with parameters still linear in
the number of splits.
Figure 1: Example of a complete trajectory in a two-node CTBN. The arrows show the transitions and time intervals that are aggregated to compute selected sufficient statistics (M's and T's). A and a denote two states for one variable, and B and b two states for a second variable.
Following canonical tree learning methods, we perform greedy tree and forest learning using iterative structure modifications. We show that the partition-based change in log likelihood can be
calculated efficiently in closed form using a multiplicative assumption. We also show that using
multiplicative forests, we can efficiently calculate the ML parameters. Thus, we can calculate the
maximum change in log likelihood for a forest modification proposal, which gives us the best iterative update to the forest model.
Finally, we conduct experiments to compare CTBNs, regression tree CTBNs (treeCTBNs) and multiplicative forest CTBNs (mfCTBNs) on three data sets. Our hypothesis is twofold: first, that learning treeCTBNs and mfCTBNs will scale better towards large domains because of their compact
model structures, and second, that mfCTBNs will outperform both CTBNs and treeCTBNs with
fewer data points because of their ability to capture multiplicative dependencies.
The rest of the paper is organized as follows: in Section 2 we provide background on CTBNs. In
Section 3 we present partition-based CTBNs, show that they subsume CTBNs and define the partitions that tree and forest structures induce. We also describe theoretical advantages of using forests
for learning and how to learn these models efficiently. We present results in Section 4 showing that
forest CTBNs are scalable to large state spaces and learn better than CTBNs, from fewer examples
and in less time. Finally, in Sections 5 and 6 we identify connections to functional gradient boosting and related continuous-time processes and discuss how our work addresses one limitation that
prevents CTBNs from finding widespread use.
2 Background
CTBNs are probabilistic graphical models that capture dependencies between variables over continuous time. A CTBN is defined by 1) a distribution for the initial state over variables X given by a Bayesian network B, and 2) a directed (possibly cyclic) graph over variables X with a set of Conditional Intensity Matrices (CIMs) for each variable X ∈ X that hold the rates (intensities) $q_{x|u}$ of variable transitions given their parents $U_X$ in the directed graph. Here a CTBN variable X ∈ X has states $x_1, \ldots, x_k$, and there is an intensity $q_{x|u}$ for every state x ∈ X given an instantiation over its parents $u \in U_X$. The intensity corresponds to the rate of transitioning out of state x; the probability density function for staying in state x given an instantiation of parents u is $q_{x|u} e^{-q_{x|u} t}$. Given a transition, X moves to some other state x′ with probability $\theta_{xx'|u}$. Taking the product over intervals bounded by single transitions, we obtain the CTBN trajectory likelihood:
$$\prod_{X \in \mathbf{X}} \prod_{x \in X} \prod_{u \in U_X} q_{x|u}^{M_{x|u}} e^{-q_{x|u} T_{x|u}} \prod_{x' \ne x} \theta_{xx'|u}^{M_{xx'|u}}$$
where $M_{x|u}$ and $M_{xx'|u}$ are the sufficient statistics indicating the number of transitions out of state x (total, and to x′, respectively), and $T_{x|u}$ are the sufficient statistics for the amount of time spent in x given the parents are in state u.
3 Partition-based CTBNs
Here we define partition-based CTBNs, an alternative framework for determining variable transition
rates. We give the syntax and semantics of our model, providing the generative model and likelihood
formulation. We then show that CTBNs are one instance in our framework. Next, we introduce
regression trees and multiplicative forests and describe the partitions they induce, which are then
used in the partition-based CTBN framework. Finally, we discuss the advantages of using trees and
forests in terms of learning compact models efficiently.
Let $\mathbf{X}$ be a finite set of discrete variables X of size n, with each variable X having a discrete set of states $\{x_1, x_2, \ldots, x_k\}$, where k may differ for each variable. We define a joint state $s = \{x_1, x_2, \ldots, x_n\}$ over $\mathbf{X}$, where the subscript indicates the variable index. We also define the partition space $\mathcal{P} = \mathbf{X}$.¹ We will shortly define set partitions P over $\mathcal{P}$, composed of disjoint parts p, each of which holds a set of elements s.
Next we define the dynamics of the model, which form a continuous-time process over $\mathbf{X}$. Each variable X transitions among its states with rate parameter $q_{x'|s}$ for entering state x′ given the joint state s.² This rate parameter (called an intensity) parameterizes the exponential distribution for transitioning into x′, given by the pdf $p(x', s, t) = q_{x'|s} e^{-q_{x'|s} t}$ for time t ∈ [0, ∞).
A partition-based CTBN has a collection of set partitions P over $\mathcal{P}$, one $P_{x'}$ for every variable state x′. For shorthand, we will often denote $p = P_{x'}(s)$ to indicate the part p of partition $P_{x'}$ to which state s belongs. We define the intensity parameter as $q_{x'|s} = q_{x'|p}$ for all s ∈ p. Note that this fixes the intensity to be the same for every s ∈ p, and also note that the set of parts p covers $\mathcal{P}$. The pdf for transitioning is given by $p(x', s, t) = p(x', P_{x'}(s), t) = q_{x'|p} e^{-q_{x'|p} t}$ for all s in p.
Now we are ready to define the partition-based CTBN model. A partition-based CTBN model M is composed of a distribution over the initial state of our variables, defined by a Bayesian network B, and a set of partitions $P_{x'}$ for every variable state x′ with corresponding sets of intensities $q_{x'|p}$.
The partition-based CTBN provides a generative framework for producing a trajectory z defined by a sequence of (state, time) pairs $(s_i, t_i)$. Given an initial state $s_0$, transition times are sampled for each variable state x′ according to $p(x', P_{x'}(s_0), t)$. The next state is selected based on the transition to the x′ with the shortest time, after which the transition times are resampled according to $p(x', s_i, t)$. Due to the memoryless property of exponential distributions, no resampling of the transition time for x′ is needed if $p(x', s_i, t) = p(x', s_{i-1}, t)$. The trajectory terminates when all sampled transition times exceed a specified ending time.
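These generative semantics translate directly into a race-of-exponential-clocks sampler. The following is a minimal sketch of our own (the callback `rate_of` is hypothetical); by memorylessness it is valid, if wasteful, to redraw every clock after each transition rather than resampling only the clocks whose parts changed.

```python
import numpy as np

def sample_trajectory(states, init, rate_of, t_end, rng):
    """Sample one partition-based CTBN trajectory on [0, t_end].

    states  : dict  variable -> list of its states
    init    : dict  variable -> initial state
    rate_of(var, nxt, joint) -> intensity q_{x'|s} for var entering state nxt
    Returns a list of (joint_state, time) transition pairs.
    """
    s, t, traj = dict(init), 0.0, []
    while True:
        clocks = {}   # one exponential clock per candidate transition
        for v in states:
            for nxt in states[v]:
                if nxt == s[v]:
                    continue
                q = rate_of(v, nxt, s)
                if q > 0:
                    clocks[(v, nxt)] = rng.exponential(1.0 / q)
        if not clocks:
            return traj
        (v, nxt), dt = min(clocks.items(), key=lambda kv: kv[1])  # soonest fires
        if t + dt > t_end:
            return traj
        t += dt
        s[v] = nxt
        traj.append((dict(s), t))
```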
¹ Note we can generalize this to larger spaces $\mathcal{P} = R \times \mathbf{X}$, where R is an external state space as in [10], but for our analysis we restrict R to be a single element r, i.e., $\mathcal{P} \cong \mathbf{X}$.
² Of note, partition-based CTBNs model the intensity of transitioning to the recipient state x′, rather than from the donor state x, because we are more often interested in the causes of entering a state.
Given a trajectory z, we can also define the model likelihood. For each interval $t_i$, the joint state remains unchanged, and then one variable transitions into x′. The likelihood given the interval is $q_{x'|s_{i-1}} \prod_{X \in \mathbf{X}} \prod_{x \in X} e^{-q_{x|s_{i-1}} t_i}$, i.e., the product of the probability density for x′ and the probability that no other variable transitions before $t_i$. Taking the product over all intervals in z, we get the model likelihood:
$$\prod_{X \in \mathbf{X}} \prod_{x' \in X} \prod_{s} q_{x'|s}^{M_{x'|s}} e^{-q_{x'|s} T_s} \tag{1}$$
where $M_{x'|s}$ is the number of transitions into x′ from state s, and $T_s$ is the total duration spent in s. Combining terms based on the membership of s to p and defining $M_{x'|p} = \sum_{s \in p} M_{x'|s}$ and $T_p = \sum_{s \in p} T_s$, we get:
$$\text{Eq.}\,(1) = \prod_{X \in \mathbf{X}} \prod_{x' \in X} \prod_{p \in P_{x'}} q_{x'|p}^{M_{x'|p}} e^{-q_{x'|p} T_p}.$$
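Since the likelihood factors over parts, evaluating the log likelihood is a single pass over sufficient statistics. A minimal sketch of our own, with hypothetical nested-dictionary containers:

```python
import math

def log_likelihood(M, T, q):
    """log of Eq. (1) grouped by parts: sum of M log(q) - q T over all parts.

    M[xp][p] : transition counts into state xp from part p
    T[xp][p] : total duration spent in part p
    q[xp][p] : intensity attached to part p
    """
    return sum(M[xp][p] * math.log(q[xp][p]) - q[xp][p] * T[xp][p]
               for xp in q for p in q[xp])
```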
3.1 CTBN as a partition-based CTBN
Here we show that CTBNs can be viewed as an instance of partition-based CTBNs. Each variable X is given a parent set $U_X$, and the transition intensities $q_{x|u}$ are recorded for leaving donor states x given the current setting of the parents $u \in U_X$. The CTBN likelihood can be shown to be:
$$\prod_{X \in \mathbf{X}} \prod_{x \in X} \prod_{u \in U_X} e^{-q_{x|u} T_{x|u}} \prod_{x' \ne x} q_{xx'|u}^{M_{xx'|u}} \tag{2}$$
as in [5], where $q_{xx'|u}$ and $M_{xx'|u}$ denote the intensity and number of transitions from state x to state x′ given parents setting u, and $\sum_{x' \ne x} q_{xx'|u} = q_{x|u}$. Rearranging the product from equation (2), we achieve a likelihood in terms of recipient states x′:
$$\text{Eq.}\,(2) = \prod_{X \in \mathbf{X}} \prod_{x \in X} \prod_{u \in U_X} \prod_{x' \ne x} q_{xx'|u}^{M_{xx'|u}} e^{-q_{xx'|u} T_{x|u}} = \prod_{X \in \mathbf{X}} \prod_{x' \in X} \prod_{p \in P_{x'}} q_{x'|p}^{M_{x'|p}} e^{-q_{x'|p} T_p} \tag{3}$$
where we define p as $\{x\} \times \{u\} \times (\mathbf{X} \setminus (\{X\} \cup U_X))$ in each partition $P_{x'}$, and likewise $q_{x'|p} = q_{xx'|u}$, $M_{x'|p} = M_{xx'|u}$, and $T_p = T_{x|u}$. Thus, CTBNs are one instance of partition-based CTBNs, with partitions corresponding to a specified donor state x and parents setting u.
3.2 Tree and forest partitions
Trees and forests induce partitions over a space defined by the set of possible split criteria [11]. Here we will define Conditional Intensity Trees (CITs): regression trees that determine the intensities $q_{x'|p}$ by inducing a partition over $\mathcal{P}$. Similarly, we will define Conditional Intensity Forests (CIFs), where tree intensities are named intensity factors whose product determines $q_{x'|p}$. An example of a CIF, composed of a collection of CITs, is shown later in the experiment results in Figure 4.
Formally, a Conditional Intensity Tree (CIT) $f_{x'}$ is a directed tree structure on a graph G(V, E) with nodes V and edges $E(V_i, V_j)$. Internal nodes $V_i$ of the tree hold splits $\sigma_{V_i} = (\psi_{V_i}, \{E(V_i, \cdot)\})$ composed of surjective maps $\psi_{V_i}: s \mapsto E(V_i, V_j)$ and lists of the outgoing edges. The maps ψ induce partitions over $\mathcal{P}$ and endow each outgoing edge $E(V_i, V_j)$ with a part $p_{V_j}$. External nodes l, or leaves, hold non-negative real values $q^{\text{CIT}}_{x'|p}$ called intensities. A path ρ from the root to a leaf induces a part p, which is the intersection of the parts on the edges of the path: $p = \bigcap_{E(V_i, V_j) \in \rho} p_{V_j}$. The parts corresponding to paths of a CIT form a partition over $\mathcal{P}$, which can be shown easily using induction and the fact that the maps $\psi_{V_i}$ induce disjoint parts $p_{V_j}$ that cover $\mathcal{P}$.
A Conditional Intensity Forest (CIF) $F_{x'}$ is a set of CITs $\{f_{x'}\}$. Because the parts of each CIT form a partition, a CIF induces a joint partition over $\mathcal{P}$ where a part p is the set of states s that have the same paths through all CITs. Finally, a CIF produces intensities from joint states by taking the product over the intensity factors from each CIT: $q^{\text{CIF}}_{x'|p^{\text{CIF}}} = \prod_{f_{x'}} q^{\text{CIT}}_{x'|p^{\text{CIT}}}$.
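Reading an intensity out of a CIF is thus a root-to-leaf walk in each tree followed by a product over the resulting factors. A minimal sketch of our own, with a hypothetical node representation:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Node:
    factor: float = 1.0                            # used only at leaves
    split: Optional[Callable[[dict], int]] = None  # joint state -> child index
    children: List["Node"] = field(default_factory=list)

def cif_intensity(forest, s):
    """q_{x'|s}: product over trees of the leaf intensity factor reached by s."""
    q = 1.0
    for node in forest:          # one CIT root per element of the forest
        while node.children:
            node = node.children[node.split(s)]
        q *= node.factor
    return q
```

Note that a forest of k one-split stumps stores only 2k factors yet can realize up to 2^k distinct intensities, which is the expressivity gap discussed in the next paragraph.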
Using regression trees and forests can greatly reduce the number of model parameters. In CTBNs,
the number of parameters grows exponentially in the number of parents per node. In tree and forest
CTBNs, the number of parameters may be linear in the number of parents per node, exploiting the
efficiency of using partitions. Notably, however, tree CTBNs are limited to having one intensity
per parameter. In forest CTBNs, the number of intensities can be exponential in the number of
parameters. Thus, the forest model has much greater potential expressivity per parameter than the
other models. We quantify these differences in the Supplementary Materials at our website.
3.3 Forest CTBN learning
Here we discuss the reasoning for using the multiplicative assumption and derive the changes in likelihood given modifications to the forest structure. Previous forest learners have used an additive assumption, e.g., averaging and aggregating, thereby taking advantage of properties of ensembles [12, 13]. However, if we take the sum over the intensity factors from each tree, there are no direct methods for calculating the change in likelihood aside from calculating the likelihood before and after a forest modification, which would require scanning the full data once per modification proposal. Furthermore, summing intensity factors could lead to intensities outside the valid domain [0, ∞).
Instead we use a multiplicative assumption since it gives us the correct range over intensities. As we show below, using the multiplicative assumption also has the advantage that it is easy to compute the change in log likelihood with changes in forest structure. Consider a partition-based CTBN $M = (B, \{F_{x'}\})$ where the partitions $P_{x'}$ and intensities $q_{x'|p}$ are given by the CIFs $\{F_{x'}\}$. We focus on change in forest structure for one state x′ ∈ X and remove x′ from the subscript notation for simplicity. Given a current forest structure F and its partition P, we formulate the change in likelihood by adding a new CIT f′ and its partition P′. One example of f′ is a new one-split stub. Another example of f′ is a tree copied to have the same structure as a CIT f in F, with all intensity factors set to one except at one leaf node where a split is added; this is equivalent to adding a split to f. We denote $\bar{P}$ as the joint partition of P and P′, with parts $\bar{p} \in \bar{P}$, $p \in P$, and $p' \in P'$. We consider the change in log likelihood ΔLL given the new and old models:
$$\begin{aligned}
\Delta LL &= \Big(\sum_{\bar{p}} M_{\bar{p}} \log q_{\bar{p}} - q_{\bar{p}} T_{\bar{p}}\Big) - \Big(\sum_{p} M_p \log q_p - q_p T_p\Big) \\
&= \Big(\sum_{\bar{p}} M_{\bar{p}} (\log q_{p'} + \log q_p) - q_{\bar{p}} T_{\bar{p}}\Big) - \Big(\sum_{p} M_p \log q_p - q_p T_p\Big) \\
&= \Big(\sum_{\bar{p}} M_{\bar{p}} \log q_{p'} - q_{\bar{p}} T_{\bar{p}}\Big) + \sum_{p} q_p T_p \\
&= \sum_{p'} M_{p'} \log q_{p'} - \sum_{\bar{p}} q_{\bar{p}} T_{\bar{p}} + \sum_{p} q_p T_p. \tag{4}
\end{aligned}$$
We make use of the multiplicative assumption that $q_{\bar{p}} = q_{p'} q_p$ and $\sum_p M_p = \sum_{p'} M_{p'} = \sum_{\bar{p}} M_{\bar{p}}$ to arrive at equation (4). The first and third terms are easy to compute given the old intensities and new intensity factors. The second term is slightly more complicated:
$$\sum_{\bar{p}} q_{\bar{p}} T_{\bar{p}} = \sum_{\bar{p}} q_{p'} q_p T_{\bar{p}} = \sum_{p'} q_{p'} \sum_{\bar{p} \subseteq p'} q_p T_{\bar{p}}.$$
We introduce the notation $\bar{p} \subseteq p'$ to denote the parts $\bar{p}$ that correspond to the part p′. The second term is a summation over parts $\bar{p}$; we have simply grouped together terms by membership in p′. The number of parts in the joint partition set $\bar{P}$ can be exponentially large, but the only remaining dependency on the joint partition space in the change in log likelihood is the term $\sum_{\bar{p} \subseteq p'} q_p T_{\bar{p}}$. We can keep track of this value as we progress through the trajectories, so the actual time cost is linear in the number of trajectory intervals. Thinking of intensities q as rates, and given durations T, we observe that the second and third terms in equation (4) are expected numbers of transitions: $E_{\bar{p}} = \sum_{\bar{p}} q_{\bar{p}} T_{\bar{p}}$ and $E_p = \sum_p q_p T_p$. We additionally define $E_{p'} = \sum_{\bar{p} \subseteq p'} q_p T_{\bar{p}}$. The expectations $E_{p'}$ and $E_p$ are the expected numbers of transitions in parts p′ and p using the old model intensities, respectively, whereas $E_{\bar{p}}$ is the expected number of transitions using the new intensities.
3.4 Maximum-likelihood parameters
The change in log likelihood depends on the intensity factor values $\{q_{p'}\}$ we choose for the new partition. We calculate the maximum likelihood parameters by setting the derivative with respect to these factors to zero to get
$$q_{p'} = \frac{M_{p'}}{\sum_{\bar{p} \subseteq p'} q_p T_{\bar{p}}} = \frac{M_{p'}}{E_{p'}}.$$
Following the derivation in [2], we assign priors to the sufficient statistics calculations. Note, however, that the priors affect the multiplicative intensity factors, so a tree may split on the same partition set twice to get a stronger effect on the intensity, with the possible risk of undesirable overfitting.
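The closed forms above make split scoring cheap. Below is a sketch of our own for evaluating one proposed split from its sufficient statistics; it assumes every new part observed at least one transition (in practice the priors mentioned above smooth the counts).

```python
import numpy as np

def score_split(M_new, E_new, E_leaf):
    """ML factors q_{p'} = M_{p'} / E_{p'} and the resulting Delta LL of Eq. (4).

    M_new[j] : counts M_{p'} in each proposed new part p'
    E_new[j] : E_{p'} = expected counts under the *old* intensities
    E_leaf   : sum of q_p * T_p over the region being split (old expected count)
    """
    q_new = M_new / E_new                       # ML multiplicative factors
    delta_ll = np.sum(M_new * np.log(q_new)) - np.sum(q_new * E_new) + E_leaf
    return q_new, delta_ll
```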
3.5 Forest implementation
We use greedy likelihood maximization steps to learn multiplicative forests (mfCTBNs). Each iteration requires repeating three steps: (re)initialization, sufficient statistics updates, and model updates. Initially we are given a blank forest $F_{x'}$ per state x′ containing a blank tree $f_{x'}$, that is, a single root node acting as a leaf with an intensity factor of one. We are also given sets of possible splits {σ} and a penalty function κ(|Z|, |M|) to penalize increased model complexity. First, for every leaf l in M, we (re)initialize the sufficient statistics $M_l$ and $E_l$ in M, as well as sufficient statistics for potential forest modifications: $M_{l,\sigma}$ and $E_{l,\sigma}$ for all l, σ. Then, we traverse each of our trajectories z ∈ Z to update each leaf. For every (state, duration) pair $(s_i, t_i)$, where $t_i$ is the time spent in state $s_{i-1}$ before the transition to $s_i$, we update the sufficient statistics that compose equation (4). Finally, we compute the change in likelihood for possible forest modifications, and choose the modification with the greatest score. If this score is greater than the cost of the additional model complexity, κ, we accept the modification. We replace the selected leaf with a branch node split upon the selected σ. The new leaf intensity factors are the product of the old intensity factor $q_l$ and the intensity factor $q_{p'}$.
Unlike most forest learning algorithms, mfCTBNs learn trees neither in series nor in parallel. Notably, the best split is determined solely by the change in log likelihood, regardless of the tree to
which it belongs. If it belongs to the blank tree at the end of the forest, that tree produces non-trivial
factors and a new blank tree is appended to the forest. In this way, as mfCTBN learns, it automatically determines the forest size and tree depth according to the evidence in the data. We provide
code and Supplementary Materials at our website.
4 Experiments
We evaluate our tree learning and forest learning algorithms on samples from three models.
The first model, which we call "Nodelman", is the benchmark model developed in [3, 2]. The second is a simplified cardiovascular health model we call "CV health", shown in Figure 2. The causes of pathologies in this field are known to be multifactorial [14]. For example, it has been well-established that independent positive risk factors for atherosclerosis include being male, a smoker, in old age, having high glucose, high BMI, and high blood pressure. The primary tool for prediction in this field is risk factor analysis, where transformations over the product of risk factor values determine overall risk. The third model, which we call "S100", is a large-scale model with one hundred binary variables.
[Figure 2: The cardiovascular health (CV health) structure used in experiments. Its nodes include smoking, gender, BMI, age, HDL, blood pressure, glucose level, atherosclerosis, arrhythmia, stroke, MI, abnormal heart electrophysiology, thrombolytic therapy, troponin levels, and chest pain.]
Parents are determined by the binomial distribution B(0.05, 200) over variable states, with intensity factor ratios of 1 : 0.5. Our goal is to show that treeCTBNs and mfCTBNs can scale to much larger model types and still learn effectively. In our experiments we set the potential splits {σ} to be the set of binary splits determined by indicators for each variable state x′. We set κ to be zero and terminate model learning when the tune set likelihood begins to decrease.
Figure 3: Average testing set log likelihood varying the training set size for each model: Nodelman
(left), CV health (center), and S100 (right). N-CTBN averages are omitted on the S100 model as
one third of the runs did not terminate.
We compare our algorithms against the learning algorithm presented in [2] using code from [15],
which we will call N-CTBN. N-CTBNs perform a greedy Bayesian structure search, adding, removing, or reversing arcs to maximize the Bayesian information criterion score, a tradeoff between the
likelihood and a combination of parameter and data size. Our algorithms use a tune set by sieving
off one quarter of the original training set trajectories. We use the same Laplace prior as used in
[15]. We use the same training and testing set for each algorithm. The trajectories are sampled
from the ground truth models for durations 10, 10 and 2 units of time, respectively. We evaluate the
three models using the testing set average log likelihood. To provide an experimental comparison
of model performance, we choose to analyze the p-values for a two-sided paired t-test for the average log likelihoods between mfCTBNs and N-CTBNs for each training set size. The results come
from testing sets with one thousand sampled trajectories. Additional evaluation criteria assessing
structural recovery were also analyzed and are provided in the Supplementary Materials.
4.1 Results
Figure 3 (left) shows that the mfCTBN substantially outperforms both the treeCTBN and the N-CTBN on the Nodelman model in terms of average log likelihood. This effect is most pronounced
with relatively few trajectories, suggesting that mfCTBNs are able to learn more quickly than either
of the other models.
We observe an even larger difference between the mfCTBN and the other models in the CV health
model in Figure 3 (center). With relatively few trajectories, the mfCTBN is able to identify the
multifactorial causes as observed in the high log likelihood and structural recall. For runs with
fewer than 500 training set trajectories, many N-CTBN models have nodes including every other
node as a parent, requiring the estimation of about 300,000 parameters on average, shown in the
Supplementary Materials. Figure 3 (right) shows that mfCTBNs can effectively learn dense models
an order of magnitude larger than those previously studied. The expected number of parents per node
in the S100 model is approximately 20. In order to exactly reconstruct the S100 model, a traditional
CTBN would then need to estimate $2^{21}$ intensity values. For many applications, variables need
more parents than this. We observe that N-CTBNs have difficulty scaling to models of this size.
The N-CTBN learning time on this data set ranges from 4 hours to more than 3 days; runs were
stopped if they had not terminated in that time. About one third of the runs failed to complete,
and the runs that did complete suggested that N-CTBN performed poorly, similar to the differences
observed in the CV health experiment. We suspect the algorithm may be similarly building nodes
with many parents; the model might need to estimate $2^{100}$ parameters, a bottleneck at minimum. By
comparison, all runs using treeCTBNs and mfCTBNs completed in less than 1 hour. The averaged
results of N-CTBNs on the S100 model are omitted accordingly.
We tested for significant differences in the average log likelihoods between the N-CTBN and
mfCTBN learning algorithms. In the Nodelman model, the differences were significant at level
of p =1e-10 for sizes 10 through 500, p = 0.05 for sizes 1000 and 5000, and not significant for size
10000. In the CV health model, the differences were significant at p =1e-9 for all training set sizes.
We were unable to generate a t-test comparison of the S100 model.
[Figure 4 shows two tree diagrams. Each internal node splits on a binary indicator (Normal BP, Youth, Normal weight, Hypertensive, Normal glucose, Frequent smoker, Male/Female, <50% atherosclerotic), and each leaf holds a multiplicative intensity factor.]
Figure 4: Ground truth (left) and mfCTBN forest learnt from 1000 trajectories (right) for intensity/rate of developing severe atherosclerosis.
Figure 4 shows the ground truth forest and the mfCTBN forest learned for the "severe atherosclerosis" state in the CV health model. To calculate the intensity of transitioning into this state, we
identify the leaf in each forest that matches the current state and take the product of their intensity
factors. Figure 4 (right) shows the recovery of the correct dependencies in approximately the right
ratios. Full forest models can be found in the Supplementary Materials.
5 Related Work
We discuss the relationships between mfCTBNs and related work in two areas: forest learning and
continuous-time processes. Forest learning with a multiplicative assumption is equivalent to forest
learning in the log space with an additive assumption and exponentiating the result. This suggests
that our method shares similarities with functional gradient boosting (FGB), a leading method for
constructing regression forests, run in the log space [16]. However, our method is different in its
direct use of a likelihood-based objective function and in its ability to modify any tree in the forest at
any iteration. Further discussion comparing the methods is provided in the Supplementary Materials.
Several other works that model variable dependencies over continuous time also exist. Poisson process networks and cascades model variable dependencies and event rates [17, 18]. Perhaps the most
closely related work, piecewise-constant conditional intensity models (PCIMs), reframes the concept of a factored CTMP to allow learning over arbitrary basis state functions with trees, possibly
piecewise over time [10]. These models focus on the "positive class", i.e., the observation or count
of observations of an event. The trouble with this is that the data used to learn the model may be incomplete. Given a timeline, we receive all observations of events but not necessarily all occurrences
of the events, and we would like to include this uncertainty in our model. For Poisson processes in
particular, the representation of the "negative" class is missing, when in some cases it is the absent
state of a variable that triggers a process, as for example in the case of gene expression networks and
negative regulation. Finally, other related work includes non-parametric continuous-time processes,
which produce exchangeable distributions over transition rate sets in unfactored CTMPs [19].
6 Conclusion
We presented an alternative representation of the dynamics of CTBNs using partition-based CTBNs
instantiated by trees and forests. Our models grow linearly in the number of forest node splits, while
CTBNs grow exponentially in the number of parent nodes per variable. Motivated by the domain
over intensities, we introduced multiplicative forests and showed that CTBN likelihood updates
can be efficiently computed using changes in log likelihood. Finally, we showed that mfCTBNs
outperform both treeCTBNs and N-CTBNs in three experiments and that mfCTBNs are scalable to
problems with many variables. With our contributions in developing scalable CTBNs and efficient
learning, along with continued improvements in inference, CTBNs can be a powerful statistical tool
to model complex processes over continuous time.
7 Acknowledgments
We gratefully acknowledge CIBM Training Program grant 5T15LM007359, NIGMS grant
R01GM097618-01, NLM grant R01LM011028-01, and ICTR NIH NCATS grant UL1TR000427.
8 References
[1] T. Dean and K. Kanazawa, "A model for reasoning about persistence and causation," Computational Intelligence, vol. 5, no. 2, pp. 142–150, 1989.
[2] U. Nodelman, C. R. Shelton, and D. Koller, "Learning continuous time Bayesian networks," in UAI, 2003.
[3] U. Nodelman, Continuous Time Bayesian Networks. PhD thesis, Stanford University, 2007.
[4] U. Nodelman, D. Koller, and C. R. Shelton, "Expectation propagation for continuous time Bayesian networks," in UAI, 2005.
[5] S. Saria, U. Nodelman, and D. Koller, "Reasoning at the right time granularity," in UAI, 2007.
[6] I. Cohn, T. El-Hay, N. Friedman, and R. Kupferman, "Mean field variational approximation for continuous-time Bayesian networks," in UAI, 2009.
[7] Y. Fan and C. R. Shelton, "Sampling for approximate inference in continuous time Bayesian networks," in AI and Mathematics, 2008.
[8] V. Rao and Y. Teh, "Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks," in UAI, 2011.
[9] D. Heckerman, "Causal independence for knowledge acquisition and inference," in UAI, pp. 122–127, 1993.
[10] A. Gunawardana, C. Meek, and P. Xu, "A model for temporal dependencies in event streams," in NIPS, 2011.
[11] C. Strobl, J. Malley, and G. Tutz, "An introduction to recursive partitioning: Rationale, application, and characteristics of classification and regression trees, bagging, and random forests," Psychological Methods, vol. 14, no. 4, p. 323, 2009.
[12] Y. Freund and R. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," in Computational Learning Theory, 1995.
[13] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[14] W. Kannel, "Blood pressure as a cardiovascular risk factor," JAMA, vol. 275, no. 20, p. 1571, 1996.
[15] C. Shelton, Y. Fan, W. Lam, J. Lee, and J. Xu, "Continuous time Bayesian network reasoning and learning engine," JMLR, vol. 11, pp. 1137–1140, 2010.
[16] J. Friedman, "Greedy function approximation: A gradient boosting machine," Annals of Statistics, 2001.
[17] S. Rajaram, T. Graepel, and R. Herbrich, "Poisson-networks: A model for structured point processes," in AI and Statistics, 2005.
[18] A. Simma, Modeling Events in Time Using Cascades of Poisson Processes. PhD thesis, EECS Department, University of California, Berkeley, Jul. 2010.
[19] A. Saeedi and A. Bouchard-Côté, "Priors over recurrent continuous time processes," in NIPS, 2011.
3,874 | 4,507 | Dual-Space Analysis of the Sparse Linear Model
David Wipf and Yi Wu
Visual Computing Group, Microsoft Research Asia
[email protected], [email protected]
Abstract
Sparse linear (or generalized linear) models combine a standard likelihood function with a sparse prior on the unknown coefficients. These priors can conveniently be expressed as a maximization over zero-mean Gaussians with different
variance hyperparameters. Standard MAP estimation (Type I) involves maximizing over both the hyperparameters and coefficients, while an empirical Bayesian
alternative (Type II) first marginalizes the coefficients and then maximizes over
the hyperparameters, leading to a tractable posterior approximation. The underlying cost functions can be related via a dual-space framework from [22], which
allows both the Type I or Type II objectives to be expressed in either coefficient
or hyperparameter space. This perspective is useful because some analyses or extensions are more conducive to development in one space or the other. Herein we
consider the estimation of a trade-off parameter balancing sparsity and data fit. As
this parameter is effectively a variance, natural estimators exist by assessing the
problem in hyperparameter (variance) space, transitioning natural ideas from Type
II to solve what is much less intuitive for Type I. In contrast, for analyses of update
rules and sparsity properties of local and global solutions, as well as extensions to
more general likelihood models, we can leverage coefficient-space techniques developed for Type I and apply them to Type II. For example, this allows us to prove
that Type II-inspired techniques can be successful in recovering sparse coefficients when unfavorable restricted isometry properties (RIP) lead to failure of popular ℓ1 reconstructions. It also facilitates the analysis of Type II when non-Gaussian likelihood models lead to intractable integrations.
1 Introduction
We begin with the likelihood model
$$y = \Phi x + \epsilon, \qquad (1)$$
where Φ ∈ R^{n×m} is a dictionary of unit ℓ2-norm basis vectors, x ∈ R^m is a vector of unknown coefficients we would like to estimate, y ∈ R^n is the observed signal, and ε is noise distributed as N(ε; 0, λI) (later we consider more general likelihood models).
large numbers of features are present relative to the signal dimension, the problem of estimating x
given y becomes ill-posed. A Bayesian framework is intuitively appealing for formulating these
types of problems because prior assumptions must be incorporated, whether explicitly or implicitly,
to regularize the solution space.
Recently, there has been a growing interest in models that employ sparse priors p(x) to encourage
solutions x with mostly small or zero-valued coefficients and a few large or unrestricted values, i.e.,
we are assuming the generative x is a sparse vector. Such solutions can be favored by using
$$p(x) \propto \prod_i \exp\!\left[-\tfrac{1}{2}\, g(x_i)\right] = \prod_i \exp\!\left[-\tfrac{1}{2}\, h\!\left(x_i^2\right)\right], \qquad (2)$$
with h concave and non-decreasing on [0, ∞) [15, 16]. Virtually all sparse priors of interest can be expressed in this manner, including the popular Laplacian, Jeffreys, Student's t, and generalized Gaussian distributions. Roughly speaking, the "more concave" h, the more sparse we expect x to be. For example, with h(z) = z, we recover a Gaussian, which is not sparse at all, while h(z) = √z gives a Laplacian distribution, with characteristic heavy tails and a sharp peak at zero.
All sparse priors of the form (2) can be conveniently framed in terms of a collection of non-negative latent variables or hyperparameters γ ≜ [γ₁, …, γ_m]ᵀ for purposes of optimization, approximation, and/or inference. The hyperparameters dictate the structure of the prior via
$$p(x) = \prod_i p(x_i), \qquad p(x_i) = \max_{\gamma_i \ge 0} \mathcal{N}(x_i; 0, \gamma_i)\,\varphi(\gamma_i), \qquad (3)$$
where φ(γ_i) is some non-negative function that is sometimes treated as a hyperprior, although it will not generally integrate to one. For the purpose of obtaining sparse point estimates of x, which will be our primary focus herein, models with latent variable sparse priors are frequently handled in one of two ways. First, the latent structure afforded by (3) offers a very convenient means of obtaining (possibly local) maximum a posteriori (MAP) estimates of x by iteratively solving
$$x^{(I)} = \arg\min_x\, -\log p(y|x)\,p(x) = \arg\min_{x;\,\gamma \ge 0}\; \|y - \Phi x\|_2^2 + \lambda \sum_i \left[\frac{x_i^2}{\gamma_i} + \log \gamma_i + f(\gamma_i)\right], \qquad (4)$$
where f(γ_i) ≜ −2 log φ(γ_i), and x^(I) is commonly referred to as a Type I estimator. Examples include minimum ℓp-norm approaches [4, 11, 16], Jeffreys prior-based methods sometimes called FOCUSS [7, 6, 9], algorithms for computing the basis pursuit (BP) or Lasso solution [6, 16, 18], and iterative reweighted ℓ1 methods [3].
Secondly, instead of maximizing over both x and γ as in (4), Type II methods first integrate out (marginalize) the unknown x and then solve the empirical Bayesian problem [19]
$$\gamma^{(II)} = \arg\max_\gamma p(\gamma|y) = \arg\max_\gamma \int p(y|x) \prod_i \mathcal{N}(x_i; 0, \gamma_i)\,\varphi(\gamma_i)\,dx_i = \arg\min_\gamma\; y^T \Sigma_y^{-1} y + \log|\Sigma_y| + \sum_{i=1}^m f(\gamma_i), \qquad (5)$$
where Σ_y ≜ λI + ΦΓΦᵀ and Γ ≜ diag[γ]. Once γ^(II) is obtained, the conditional distribution p(x|y; γ^(II)) is Gaussian, and a point estimate for x naturally emerges as the posterior mean
$$x^{(II)} = E\!\left[x\,|\,y; \gamma^{(II)}\right] = \Gamma^{(II)} \Phi^T \left(\lambda I + \Phi \Gamma^{(II)} \Phi^T\right)^{-1} y. \qquad (6)$$
Pertinent examples include sparse Bayesian learning and the relevance vector machine (RVM) [19],
automatic relevance determination (ARD) [14], methods for learning overcomplete dictionaries [8],
and large-scale experimental design [17].
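To make the Type II estimator concrete, here is a minimal numerical sketch of the posterior mean (6) for a given hyperparameter vector γ; it assumes γ has already been obtained, e.g., by minimizing (5), and all dimensions and data below are purely illustrative.

```python
import numpy as np

def type2_posterior_mean(Phi, y, gamma, lam):
    """x^(II) = Gamma Phi^T (lam I + Phi Gamma Phi^T)^{-1} y, as in Eq. (6)."""
    n = Phi.shape[0]
    Sigma_y = lam * np.eye(n) + (Phi * gamma) @ Phi.T   # lam I + Phi Gamma Phi^T
    return gamma * (Phi.T @ np.linalg.solve(Sigma_y, y))

rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 25))
Phi /= np.linalg.norm(Phi, axis=0)        # unit l2-norm dictionary columns
y = rng.standard_normal(10)
x_ii = type2_posterior_mean(Phi, y, gamma=np.ones(25), lam=0.1)
```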
While initially these two approaches may seem vastly different, both can be directly compared using
a dual-space view [22] of the underlying cost functions. In brief, this involves expressing both the
Type I and Type II objectives solely in terms of either x or γ, as reviewed in Section 2. The dual-space view is advantageous for several reasons, such as establishing connections between algorithms, developing efficient update rules, or handling more general (non-Gaussian) likelihood functions. In Section 3, we utilize γ-space cost functions to develop a principled method for choosing the trade-off parameter λ (which accompanies the Gaussian likelihood model and essentially balances sparsity and data fit) and demonstrate its effectiveness via simulations. Section 4 then derives a new Type II-inspired algorithm in x-space that can compute maximally sparse (minimal ℓ0 norm) solutions even with highly coherent dictionaries, proving a result for clustered dictionaries that previously has only been shown empirically [21]. Finally, Section 5 leverages duality to address Type II methods with generalized likelihood functions that previously were rendered untenable because of intractable integrals. In general, some tasks and analyses are easier to undertake in γ-space (Section 3), while others are more transparent in x-space (Sections 4 and 5). Here we consider both, with the goal of advancing the proper understanding and full utilization of the sparse linear model.
2 Dual-Space View of the Sparse Linear Model
Type I is based on a natural cost function in x-space, p(x|y), while Type II involves an analogous function in γ-space, p(γ|y). The dual-space view defines a corresponding γ-space cost function for Type I and an x-space cost function for Type II to complete the symmetry.
Type II in x-Space: Using the relationship
$$y^T \Sigma_y^{-1} y = \min_x\; \frac{1}{\lambda}\,\|y - \Phi x\|_2^2 + x^T \Gamma^{-1} x \qquad (7)$$
as in [22], it can be shown that the Type II coefficients from (6) satisfy x^(II) = arg min_x L^(II)(x), where
$$L^{(II)}(x) \triangleq \|y - \Phi x\|_2^2 + \lambda\, g^{(II)}(x), \qquad (8)$$
and
$$g^{(II)}(x) \triangleq \min_{\gamma \ge 0}\; \sum_i \frac{x_i^2}{\gamma_i} + \log|\Sigma_y| + \sum_i f(\gamma_i). \qquad (9)$$
This reformulation of Type II in x-space is revealing for multiple reasons (Sections 4 and 5 will address additional reasons in detail). For many applications of the sparse linear model, the primary goal is simply a point estimate that exhibits some degree of sparsity, meaning many elements of x̂ near zero and a few relatively large coefficients. This requires a penalty function g(x) that is concave and non-decreasing in x² ≜ [x₁², …, x_m²]ᵀ. In the context of Type I, any prior p(x) expressible via (2) will satisfy this condition by definition; such priors are said to be strongly super-Gaussian and will always have positive kurtosis [15]. Regarding Type II, because the associated x-space penalty (9) is represented as a minimum of upper-bounding hyperplanes with respect to x² (and the slopes are all non-negative given γ ≥ 0), it must therefore be concave and non-decreasing in x² [1].
For compression, interpretability, or other practical reasons, it is sometimes desirable to have exactly sparse point estimates, with many (or most) elements of x equal to exactly zero. This then necessitates a penalty function g(x) that is concave and non-decreasing in |x| ≜ [|x₁|, …, |x_m|]ᵀ, a much stronger condition. In the case of Type I, if log γ + f(γ) is concave and non-decreasing in γ, then g(x) = Σ_i g(x_i) satisfies this condition. The Type II analog, which emerges by further inspection of (9), stipulates that if
$$\log|\Sigma_y| + \sum_i f(\gamma_i) = \log\left|\lambda^{-1}\Phi^T\Phi + \Gamma^{-1}\right| + \log|\Gamma| + \sum_i f(\gamma_i) \qquad (10)$$
is a concave and non-decreasing function of γ, then g^(II)(x) will be a concave, non-decreasing function of |x|. For this purpose it is sufficient, but not necessary, that f be a concave and non-decreasing function. Note that this is a somewhat stronger criterion than Type I, since the first term on the right-hand side of (10) (which is absent from Type I) is actually convex in γ. Regardless, it is now very transparent how Type II may promote sparsity akin to Type I.
The dual-space view also leads to efficient, convergent algorithms such as iterative reweighted ℓ1 minimization and its variants, as discussed in [22]. However, building on these ideas, we can demonstrate here that it also elucidates the original, widely applied update procedures developed for implementing the relevance vector machine (RVM), a popular Type II method for regression and classification that assumes f(γ) = 0 [19]. In fact these updates, which were inspired by a fixed-point heuristic from [12], have been widely used for a number of Bayesian inference tasks without any formal analyses or justification.¹ The dual-space formulation can be leveraged to show that these updates are in fact executing a coordinate-wise, iterative min-max procedure in search of a saddle point. Specifically, we have the following result (all proofs are in the supplementary material):
Theorem 1. The original RVM update rule from [19, Equation (16)] is equivalent to a closed-form, coordinate-wise optimization of
$$\min_{x;\,\gamma \ge 0}\; \max_{z \ge 0}\; \|y - \Phi x\|_2^2 + \sum_i \left[\frac{x_i^2}{\gamma_i} + z_i \log \gamma_i\right] - \varphi^*(z) \qquad (11)$$
over x, γ, and z, where φ*(z) is the convex conjugate function [1] of log|λI + Φ diag[exp(u)] Φᵀ| with respect to u.
¹Although a more recent, step-wise variant of the RVM has been shown to be substantially faster [20], the original version is still germane since it can easily be extended to handle more general structured sparsity problems. The step-wise method cannot be so extended without introducing additional approximations [10].
Type I in γ-Space: Similar methodology and the expansion of yᵀΣ_y⁻¹y can be used to express the Type I optimization problem in γ-space, which serves several useful purposes. Let γ^(I) ≜ arg min_{γ≥0} L^(I)(γ), with
$$L^{(I)}(\gamma) \triangleq y^T \Sigma_y^{-1} y + \log|\Gamma| + \sum_{i=1}^m f(\gamma_i). \qquad (12)$$
Then the Type I coefficients obtained from (4) satisfy
$$x^{(I)} = \Gamma^{(I)} \Phi^T \left(\lambda I + \Phi \Gamma^{(I)} \Phi^T\right)^{-1} y. \qquad (13)$$
Section 3 will use γ-space cost functions to derive well-motivated approaches for learning the trade-off parameter λ.
3 Choosing the Trade-off Parameter λ
The trade-off parameter is crucial for obtaining good estimates of x. In general, if λ is too large, x̂ → 0; too small and x̂ is overfitted to the noise. In practice, either expensive cross-validation or some heuristic procedure is often required. However, because λ can be interpreted as a variance, it is useful to address its estimation in γ-space, in which the existing unknowns (i.e., γ) are also variances.
Learning λ with Type I: Consider the Type I cost function L^(I)(γ). The data-dependent term can be shown to be a convex, non-increasing function of γ, which encourages each element to be large. The second term is a penalty factor that regulates the size of γ. It is here that a convenient regularizer for λ can be incorporated.
This can be accomplished as follows. First we expand Σ_y via Σ_y = Σ_{i=1}^m γ_i φ_{·i}φ_{·i}ᵀ + Σ_{j=1}^n λ e_j e_jᵀ, where φ_{·i} denotes the i-th column of Φ and e_j is a column vector of zeros with a '1' in the j-th location. Thus we observe that λ is embedded in the data-dependent term in the exact same fashion as each γ_i. This motivates a penalty on λ with similar correspondence, leading to the objective
$$L^{(I)}(\gamma, \lambda) \triangleq y^T \Sigma_y^{-1} y + \sum_{i=1}^m \left[\log \gamma_i + f(\gamma_i)\right] + \sum_{j=1}^n \left[\log \lambda + f(\lambda)\right] = y^T \Sigma_y^{-1} y + \sum_{i=1}^m \left[\log \gamma_i + f(\gamma_i)\right] + n \log \lambda + n f(\lambda). \qquad (14)$$
While admittedly simple, this construction is appealing because, regardless of how each γ_i is penalized, λ is penalized in a proportional manner, so both γ and λ have a properly balanced chance of explaining the observed data. This is important because the optimal λ will be highly dependent on both the true noise level and, crucially, the particular sparse prior assumed for p(x) (as reflected by f).
minimizes (14), is equivalent to solving
X
1
min
s.t. y = ?x + u.
(15)
g(xi ) + ng ? kuk2 ,
x,u
n
i
If x⋆ and u⋆ minimize (15), then we can demonstrate using [15] that the corresponding λ estimate, which also minimizes (14), is given by λ̂ = [∂h(z)/∂z]⁻¹ evaluated at z = (1/n)‖u⋆‖₂². Note that if we were just performing maximum likelihood estimation of λ given x⋆, the optimal value would reduce to simply λ̂ = (1/n)‖u⋆‖₂², with no influence from the prior on x. This is a fundamental weakness. Solving (15), or equivalently (14), can be accomplished using simple iterative reweighted least squares, or, if g is concave in |x_i|, an iterative reweighted second-order-cone (SOC) minimization.
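A minimal IRLS sketch of this procedure for the ℓp case g(x) = Σ_i |x_i|^p is given below: a FOCUSS-style reweighted least-squares update of x alternates with the λ update λ̂ = [∂h(z)/∂z]⁻¹ at z = (1/n)‖u‖₂², which for h(z) = z^{p/2} gives λ̂ = (2/p) z^{1−p/2}. The initialization, iteration count, and smoothing constant below are our own choices for illustration and are not prescribed by the text.

```python
import numpy as np

def type1_learn_lambda(Phi, y, p=1.0, iters=50, eps=1e-8):
    """Jointly estimate x and lambda by alternating minimization of (15)."""
    n, m = Phi.shape
    x = Phi.T @ np.linalg.lstsq(Phi @ Phi.T, y, rcond=None)[0]  # min-norm start
    lam = 1.0
    for _ in range(iters):
        w_inv = np.abs(x) ** (2 - p) + eps           # diagonal of W^{-1}
        G = (Phi * w_inv) @ Phi.T + lam * np.eye(n)  # Phi W^{-1} Phi^T + lam I
        x = w_inv * (Phi.T @ np.linalg.solve(G, y))  # reweighted LS update
        u = y - Phi @ x                              # residual u = y - Phi x
        z = (u @ u) / n
        lam = (2.0 / p) * z ** (1 - p / 2)           # lam = [h'(z)]^{-1}
    return x, lam
```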
Learning λ with Type II: The same procedure can be adopted for Type II, yielding the cost function
$$L^{(II)}(\gamma, \lambda) = y^T \Sigma_y^{-1} y + \log|\Sigma_y| + \sum_i f(\gamma_i) + n f(\lambda), \qquad (16)$$
where we note that, unlike in the Type I case above, the log-based term is already naturally balanced between γ and λ by virtue of the symmetric embedding in Σ_y. It is important to stress that this Type II prescription for learning λ is not the same as originally proposed in the literature for Type II models of this genre. In this context, φ(γ_i) is interpreted as a hyperprior on γ_i, and an equivalent distribution is assumed on the noise variance λ. Importantly, these assumptions leave out the factor of n in (16), and so an asymmetry is created.
Simulation Examples: Empirical tests help to illustrate the efficacy of this procedure. As in many applications of sparse reconstruction, here we are only concerned with accurately estimating x, whose nonzero entries may have physical significance (e.g., source localization [16], compressive sensing [2], etc.), as opposed to predicting new values of y. Therefore, automatically learning the value of λ is particularly relevant, since cross-validation is often not possible.² Simulations are helpful for evaluation purposes since we then have access to the true sparse generating vector.
Figure 1 compares the estimation performance obtained by minimizing (15) with two different selections for g: g(x) = ‖x‖_p^p = Σ_i |x_i|^p, with p = 0.01 and p = 1.0. Data generation proceeds as follows: We create a random 100 × 50 dictionary Φ, with ℓ2-normalized, iid Gaussian columns. x is randomly generated with 10 unit Gaussian nonzero elements. We then compute y = Φx + ε, where ε is iid Gaussian noise producing an SNR of 0dB. To determine which λ values lead to optimal performance, we solve (4) with the appropriate g over a range of fixed λ values (10⁻⁴ to 10¹) and then compute the error between x and x̂. The minimum of this curve reflects the best performance we can hope to achieve when learning λ blindly. In Figure 1 (left) we plot these curves for both Type I methods averaged over 1000 independent trials.
Next we solve (15), which produces an estimate of both x and λ. We mark with a '+' the learned λ versus the corresponding error of x̂. In both cases the learned λ's (averaged across trials) perform just as well as if we knew the optimal value a priori. Results using other noise levels, problem dimensions n and m, sparsity levels ‖x‖₀, and sparsity penalties g are similar. See the supplementary material for more examples.
Figure 1 (right) shows the average sparsity of estimates x̂, as quantified by the ℓ0 norm ‖x̂‖₀, across λ values (‖x‖₀ returns a count of the number of nonzero elements in x). The '+' indicates the average sparsity of each x̂ for the learned λ as before. In general, the ℓ(0.01) penalty produces a much sparser estimate, very near the true value of ‖x‖₀ = 10 at the optimal λ. The ℓ1 penalty, which is substantially less concave/sparsity-inducing, still sets some elements to exactly zero, but also substantially shrinks nonzero coefficients in achieving a similar overall reconstruction error. This highlights the importance of learning λ via a penalty that is properly matched to the prior on x: if we instead tried to force a particular sparsity value (in this case 10), then the ℓ1 solution would be very suboptimal. Finally, we note that maximum likelihood (ML) estimation of λ performs very poorly (not shown), except in the special case where the ML estimate is equivalent to solving (14), as occurs when f(λ) = 0 (see [6]). The proposed method can be viewed as adding a principled hyperprior on λ, properly matched to p(x), that compensates for this shortcoming of standard ML.
Type II λ estimation has been explored elsewhere for the special case where f(γ) = 0 [19], which renders the factor of n in (16) irrelevant; however, for other selections we have found this factor to improve performance (not shown). For space considerations we have focused our attention here on Type I, which has frequently been noted for not lending itself well to λ estimation (or related parameters) [6, 13]. In fact, the symmetry afforded by the dual-space perspective reveals that Type I is just as natural a candidate for this task as Type II, and may be preferred in high-dimensional settings where computational resources are at a premium.
4 Maximally Sparse Estimation
With the advent of compressive sensing and other related applications, there has been growing interest in finding maximally sparse signal representations from redundant dictionaries (m ≫ n) [3, 5]. The canonical form of this problem involves solving
$$x_0 \triangleq \arg\min_x \|x\|_0, \qquad \text{s.t. } y = \Phi x. \qquad (17)$$
²For example, in non-stationary environments, the value of both x and λ may be completely different for any new y, which then necessitates that we estimate both jointly.
Figure 1: Left: Normalized mean-squared error (MSE), given by ‖x − x̂‖₂²/‖x‖₂² (where the average is across 1000 trials), plotted versus λ for two different Type I approaches. Each black '+' represents the estimated value of λ (averaged across trials) and the associated MSE produced with this estimate. In both cases the estimated value achieves the lowest possible MSE (it can actually be slightly lower than the curve because its value is allowed to fluctuate from trial to trial). Right: Solution sparsity ‖x̂‖₀ versus λ. Even though they both lead to similar MSE, the ℓ(0.01) penalty produces a much sparser estimate at the optimal λ value.
While (17) is NP-hard, whenever the dictionary Φ satisfies a restricted isometry property (RIP) [2] or a related structural assumption, meaning that every set of ‖x₀‖₀ columns of Φ is sufficiently close to orthonormal (i.e., mutually uncorrelated), then replacing ℓ0 with ℓ1 in (17) leads to a convex problem with an equivalent global solution. Unfortunately, however, in many situations (e.g., feature selection, source localization) these RIP equivalence conditions are grossly violated, implying that the ℓ1 solution may deviate substantially from x₀.
An alternative is to instead replace (17) with minimization of (8) and then take the limit as λ → 0. (Note that the extension to the noisy case with λ > 0 is straightforward, but analysis is more difficult.) In this regime the optimization problem reduces to
$$x^{(II)} = \lim_{\lambda \to 0} \arg\min_x\; g^{(II)}(x), \qquad \text{s.t. } y = \Phi x. \qquad (18)$$
If log|Σ_y| + Σ_i f(γ_i) is concave, then (18) can be minimized using reweighted ℓ1 minimization. With initial weight vector w⁽⁰⁾ = 1, the (k+1)-th iteration involves computing
$$x^{(k+1)} \leftarrow \arg\min_{x:\, y = \Phi x} \sum_i w_i^{(k)} |x_i|, \qquad w_i^{(k+1)} \leftarrow \left.\frac{\partial g^{(II)}(x)}{\partial |x_i|}\right|_{x = x^{(k+1)}}. \qquad (19)$$
With f(γ) = 0, iterating (19) will provably lead to an estimate of x₀ that is as good as or better than the ℓ1 solution [21], in particular when Φ has highly correlated columns. Additionally, the assumption f(γ) = 0 leads to a closed-form expression for the weights w^(k+1). Let
$$\alpha_i(x; \lambda, q) \triangleq \left(\phi_{\cdot i}^T \left(\lambda I + \Phi\, |X^{(k+1)}|^2\, \Phi^T\right)^{-1} \phi_{\cdot i}\right)^q, \qquad (20)$$
where |X^(k+1)| denotes a diagonal matrix with i-th diagonal entry given by |x_i^(k+1)|. Then w^(k+1) can be computed via w_i^(k+1) = α_i(x; 0, 1/2) for all i. It remains unclear, however, in what circumstances this type of update can lead to guaranteed improvement, nor whether the functions α_i(x; 0, 1/2) are even the optimal choice. We will now demonstrate that, for certain selections of λ and q, reweighted ℓ1 using α_i(x; λ, q) is guaranteed to recover x₀ exactly if Φ is drawn from what we call a clustered dictionary model.
Definition 1. Clustered Dictionary Model: Let Φ_uncorr^(d) denote any dictionary such that ℓ1 minimization succeeds in solving (17) for all ‖x₀‖₀ ≤ d. Let Φ_corr^(d,ε) denote any dictionary obtained by replacing each column of Φ_uncorr^(d) with a "cluster" of m_i basis vectors such that the angle between any two vectors within a cluster is less than some ε > 0. We also define the cluster support Ω₀ ⊆ {1, 2, …, m} as the set of cluster indices whereby x₀ has at least one nonzero element. Finally, we assume that the resulting Φ_corr^(d,ε) is such that every n × n submatrix is full rank.
Theorem 2. For any sparse vector x₀ and any dictionary Φ_corr^(d,ε) obtained from the clustered dictionary model with ε sufficiently small, reweighted ℓ1 minimization using weights α_i(x; λ, q) with some q ≤ 1 and λ sufficiently small will recover x₀ exactly, provided that |Ω₀| ≤ d, Σ_{i∈Ω₀} m_i ≤ n, and within each cluster k ∈ Ω₀ the coefficients do not sum to zero.
Theorem 2 implies that even though ℓ1 may fail to find the maximally sparse x₀ because of severe RIP violations (high correlations between groups of dictionary columns, as dictated by ε, lead directly to a poor RIP), a Type II-inspired method can still be successful. Moreover, because whenever ℓ1 does succeed, Type II will always succeed as well (assuming a reweighted ℓ1 implementation), the converse (RIP violation leading to Type II failure but not ℓ1 failure) can never happen. Recent work from [21] has argued that Type II may be useful for addressing the sparse recovery problem with correlated dictionaries, and empirical evidence is provided showing vastly superior performance on clustered dictionaries. However, we stress that no results proving global convergence to the correct, maximally sparse solution have been shown before in the case of structured dictionaries (except in special cases with strong, unverifiable constraints on coefficient magnitudes [21]). Moreover, the proposed weighting strategy α_i(x; λ, q) accomplishes this without any particular tuning to the clustered dictionary model under consideration and thus likely holds in many other cases as well.
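The reweighted scheme (19)–(20) is easy to sketch directly; below, each weighted ℓ1 step is solved as a linear program via the standard split x = x⁺ − x⁻, and a small positive λ stands in for the exact λ = 0 limit (which would require a pseudo-inverse). The iteration count and numerical constants are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(Phi, y, w):
    """min sum_i w_i |x_i|  s.t.  Phi x = y, via the LP split x = x+ - x-."""
    n, m = Phi.shape
    res = linprog(np.concatenate([w, w]),
                  A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                  bounds=[(0, None)] * (2 * m))
    return res.x[:m] - res.x[m:]

def type2_reweighted_l1(Phi, y, iters=10, lam=1e-6, q=0.5):
    """Iterate (19) with weights alpha_i(x; lam, q) from Eq. (20)."""
    n, m = Phi.shape
    w = np.ones(m)
    for _ in range(iters):
        x = weighted_l1(Phi, y, w)
        S = lam * np.eye(n) + (Phi * x**2) @ Phi.T      # lam I + Phi|X|^2 Phi^T
        S_inv_Phi = np.linalg.solve(S, Phi)
        w = np.einsum("ij,ij->j", Phi, S_inv_Phi) ** q  # (phi_i^T S^{-1} phi_i)^q
    return x
```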
5 Generalized Likelihood Functions
Type I methods naturally accommodate alternative likelihood functions. We simply must replace the quadratic data-fit term from (4) with some preferred function, and then coordinate-wise optimization may proceed provided we have an efficient means of computing a weighted ℓ2-norm penalized solution. In contrast, generalizing Type II is substantially more complicated because it is no longer possible to compute the marginalization (5) or the posterior distribution p(x|y; γ^(II)). Therefore, to obtain a tractable estimate x^(II), additional heuristics are required. For example, the RVM classifier from [19] employs a Laplace approximation for this purpose; however, it is not clear what cost function is being minimized, nor what rigorous properties the estimated solutions possess.
Fortunately, the dual x-space view provides a natural mechanism for generalizing the basic Type II methodology to address alternative likelihood functions in a more principled manner. In the case of classification problems, we might want to replace the Gaussian likelihood p(y|x) implied by (1) with a multivariate Bernoulli distribution p(y|x) such that log p(y|x) = ψ(y, x), where ψ(y, x) is the function
$$\psi(y, x) \triangleq \sum_j \left(y_j \log\left[\mu_j(x)\right] + (1 - y_j)\log\left[1 - \mu_j(x)\right]\right). \qquad (21)$$
Here y_j ∈ {0, 1} and μ_j(x) ≜ 1/[1 + exp(−φ_{j·} x)], with φ_{j·} denoting the j-th row of Φ. This function may be naturally substituted into the x-space Type II cost function (8), giving us the candidate penalized logistic regression function
$$\min_x\; -\psi(y, x) + \lambda\, g^{(II)}(x). \qquad (22)$$
Importantly, recasting Type II classification using x-space in this way, with its attendant well-specified cost function, facilitates more concrete analyses (see below) regarding properties of global and local minima that were previously rendered inaccessible because of intractable integrals and compensatory approximations. Moreover, we retain a tight connection with the original Type II marginalization process as follows.
Consider the strict upper bound on the function −ψ(y, x) (obtained by a Taylor series approximation and a Hessian bound) given by
$$-\psi(y, x) \;\le\; \bar{\psi}(y, x, v) \;\triangleq\; -\psi(y, v) + (v - x)^T \Phi^T t + \tfrac{1}{8}\,(v - x)^T \Phi^T \Phi\,(v - x), \qquad (23)$$
where t = [t₁, …, t_n]ᵀ with t_j ≜ y_j − μ_j(v). This bound holds for all v, with equality when v = x. Using this result we obtain the lower bound on the marginal likelihood given by
$$\log \int \exp[\psi(y, x)]\,p(x)\,dx \;\ge\; \log \int \exp[-\bar{\psi}(y, x, v)]\,p(x)\,dx.$$
The dual-space framework can then be used to derive the following result:
Theorem 3. Minimization of (22) with λ = 4 is equivalent to solving
$$\max_{v;\,\gamma \ge 0} \int \exp\left[-\bar{\psi}(y, x, v)\right] \prod_i \mathcal{N}(x_i; 0, \gamma_i)\,\varphi(\gamma_i)\,dx_i \qquad (24)$$
and then computing x^(II) by plugging the resulting γ into (6).
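As a quick numerical sanity check of the bound (23) underlying this result, the sketch below evaluates ψ and the quadratic upper bound at random points, verifying both the inequality and the equality at v = x; all dimensions and data are arbitrary.

```python
import numpy as np

def psi(y, x, Phi):
    """Bernoulli log-likelihood psi(y, x) from Eq. (21)."""
    mu = 1.0 / (1.0 + np.exp(-Phi @ x))
    return np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu))

def psi_bar(y, x, v, Phi):
    """Quadratic upper bound on -psi(y, x) from Eq. (23)."""
    t = y - 1.0 / (1.0 + np.exp(-Phi @ v))   # t_j = y_j - mu_j(v)
    d = v - x
    return -psi(y, v, Phi) + d @ (Phi.T @ t) + 0.125 * np.sum((Phi @ d) ** 2)

rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 5))
y = (rng.random(20) < 0.5).astype(float)
x, v = rng.standard_normal(5), rng.standard_normal(5)
assert -psi(y, x, Phi) <= psi_bar(y, x, v, Phi) + 1e-9     # bound holds
assert np.isclose(-psi(y, x, Phi), psi_bar(y, x, x, Phi))  # equality at v = x
```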
Thus we may conclude that (22) provides a principled approximation to (5) when a Bernoulli likelihood function is used for classification purposes. In empirical tests on benchmark data sets (see supplementary material) using f(γ) = 0, it performs nearly identically to the original RVM (which also implicitly assumes f(γ) = 0), but nonetheless provides a more solid theoretical justification for Type II classifiers because of the underlying similarities and identical generative model. But while the RVM and its attendant approximations are difficult to analyze, (22) is relatively transparent. Additionally, for other sparse priors, or equivalently other selections for f, we can still perform optimization and analyze cost functions without any conjugacy requirements on the implicit p(x).
Theorem 4. If log|Σ_y| + Σ_i f(γ_i) is a concave, non-decreasing function of γ (as will be the case if f is concave and non-decreasing), then every local optimum of (24) is achieved at a solution with at most n nonzero elements in γ, and therefore in x^(II). In contrast, if −log p(x) is convex, then (24) can be globally solved via a convex program.
Despite the practical success of the RVM and related Bayesian techniques, and empirical evidence of
sparse solutions, there is currently no proof that the standard variants of these classification methods
will always produce exactly sparse estimates. Thus Theorem 4 provides some analytical validation
of these types of classifiers.
Finally, if we take (22) as our starting point, we may naturally consider modifications tailored to specific sparse classification tasks (that may or may not retain an explicit connection with the original Type II probabilistic model). For example, suppose we would like to obtain a maximally sparse classifier, where regularization is provided by a ‖x‖₀ penalty. Direct optimization is combinatorial because of what we call the global zero attraction property: whenever any individual coefficient x_i goes to zero, we are necessarily at a local minimum with respect to this coefficient because of the infinite slope (discontinuity) of the ℓ0 norm at zero. However, (22) can be modified to approximate the ℓ0 without this property as follows.
Theorem 5. Consider the Type II-inspired minimization problem
$$\hat{x}, \hat{\gamma} \;=\; \arg\min_{x;\,\gamma \ge 0}\; -\psi(y, x) + \lambda_1 \sum_i \frac{x_i^2}{\gamma_i} + \log\left|\lambda_2 I + \Phi \Gamma \Phi^T\right|, \qquad (25)$$
which is equivalent to (22) with f(γ) = 0 when λ₁ = λ₂ = λ. For some λ₁ and λ₂ sufficiently small (but not necessarily equal), the support³ of x̂ will match the support of arg min_x −ψ(y, x) + λ‖x‖₀. Moreover, (25) does not satisfy the global zero attraction property.
Thus Type II affords the possibility of mimicking the ℓ0 norm in the presence of generalized likelihoods, but with the advantageous potential for drastically fewer local minima. This is a direction for future research. Additionally, while here we have focused our attention on classification via logistic regression, these ideas can presumably be extended to other likelihood functions provided certain conditions are met. To the best of our knowledge, while already demonstrably successful in an empirical setting, Type II classifiers and other related Bayesian generalized likelihood models have never been analyzed in the context of sparse estimation as we have done in this section.
6 Conclusion
The dual-space view of sparse linear or generalized linear models naturally allows us to transition x-space ideas originally developed for Type I and apply them to Type II, and conversely, apply γ-space techniques from Type II to Type I. The resulting symmetry promotes a mutual understanding of both methodologies and helps ensure that they are not underutilized.
³Support refers to the index set of the nonzero elements.
References
[1] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[2] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Information Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
[3] E. Candès, M. Wakin, and S. Boyd, "Enhancing sparsity by reweighted ℓ1 minimization," J. Fourier Anal. Appl., vol. 14, no. 5, pp. 877–905, 2008.
[4] R. Chartrand and W. Yin, "Iteratively reweighted algorithms for compressive sensing," Proc. Int. Conf. Acoustics, Speech, and Signal Proc., 2008.
[5] D.L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization," Proc. National Academy of Sciences, vol. 100, no. 5, pp. 2197–2202, March 2003.
[6] M.A.T. Figueiredo, "Adaptive sparseness using Jeffreys prior," Advances in Neural Information Processing Systems 14, pp. 697–704, 2002.
[7] C. Févotte and S.J. Godsill, "Blind separation of sparse sources using Jeffreys inverse prior and the EM algorithm," Proc. 6th Int. Conf. Independent Component Analysis and Blind Source Separation, Mar. 2006.
[8] M. Girolami, "A variational method for learning sparse and overcomplete representations," Neural Computation, vol. 13, no. 11, pp. 2517–2532, 2001.
[9] I.F. Gorodnitsky and B.D. Rao, "Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm," IEEE Transactions on Signal Processing, vol. 45, no. 3, pp. 600–616, March 1997.
[10] S. Ji, D. Dunson, and L. Carin, "Multi-task compressive sensing," IEEE Trans. Signal Processing, vol. 57, no. 1, pp. 92–106, Jan. 2009.
[11] K. Kreutz-Delgado, J.F. Murray, B.D. Rao, K. Engan, T.-W. Lee, and T.J. Sejnowski, "Dictionary learning algorithms for sparse representation," Neural Computation, vol. 15, no. 2, pp. 349–396, February 2003.
[12] D.J.C. MacKay, "Bayesian interpolation," Neural Computation, vol. 4, no. 3, pp. 415–447, 1992.
[13] J. Mattout, C. Phillips, W.D. Penny, M.D. Rugg, and K.J. Friston, "MEG source localization under multiple constraints: An extended Bayesian framework," NeuroImage, vol. 30, pp. 753–767, 2006.
[14] R.M. Neal, Bayesian Learning for Neural Networks, Springer-Verlag, New York, 1996.
[15] J.A. Palmer, D.P. Wipf, K. Kreutz-Delgado, and B.D. Rao, "Variational EM algorithms for non-Gaussian latent variable models," Advances in Neural Information Processing Systems 18, pp. 1059–1066, 2006.
[16] B.D. Rao, K. Engan, S.F. Cotter, J. Palmer, and K. Kreutz-Delgado, "Subset selection in noise based on diversity measure minimization," IEEE Trans. Signal Processing, vol. 51, no. 3, pp. 760–770, March 2003.
[17] M. Seeger and H. Nickisch, "Large scale Bayesian inference and experimental design for sparse linear models," SIAM J. Imaging Sciences, vol. 4, no. 1, pp. 166–199, 2011.
[18] R. Tibshirani, "Regression shrinkage and selection via the Lasso," Journal of the Royal Statistical Society, vol. 58, no. 1, pp. 267–288, 1996.
[19] M.E. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol. 1, pp. 211–244, 2001.
[20] M.E. Tipping and A.C. Faul, "Fast marginal likelihood maximisation for sparse Bayesian models," Ninth Int. Workshop on Artificial Intelligence and Statistics, Jan. 2003.
[21] D.P. Wipf, "Sparse estimation with structured dictionaries," Advances in Neural Information Processing Systems 24, 2011.
[22] D.P. Wipf, B.D. Rao, and S. Nagarajan, "Latent variable Bayesian models for promoting sparsity," IEEE Trans. Information Theory, vol. 57, no. 9, Sept. 2011.
3,875 | 4,508 | Supervised Learning with Similarity Functions
Purushottam Kar
Indian Institute of Technology
Kanpur, INDIA
[email protected]
Prateek Jain
Microsoft Research Lab
Bangalore, INDIA
[email protected]
Abstract
We address the problem of general supervised learning when data can only be accessed through an (indefinite) similarity function between data points. Existing
work on learning with indefinite kernels has concentrated solely on binary/multiclass classification problems. We propose a model that is generic enough to handle
any supervised learning task and also subsumes the model previously proposed for
classification. We give a "goodness" criterion for similarity functions w.r.t. a given supervised learning task and then adapt a well-known landmarking technique to provide efficient algorithms for supervised learning using "good" similarity functions. We demonstrate the effectiveness of our model on three important supervised learning problems: a) real-valued regression, b) ordinal regression, and c) ranking, where we show that our method guarantees bounded generalization error.
Furthermore, for the case of real-valued regression, we give a natural goodness
definition that, when used in conjunction with a recent result in sparse vector recovery, guarantees a sparse predictor with bounded generalization error. Finally,
we report results of our learning algorithms on regression and ordinal regression
tasks using non-PSD similarity functions and demonstrate the effectiveness of
our algorithms, especially that of the sparse landmark selection algorithm that
achieves significantly higher accuracies than the baseline methods while offering
reduced computational costs.
1 Introduction
The goal of this paper is to develop an extended framework for supervised learning with similarity
functions. Kernel learning algorithms [1] have become the mainstay of discriminative learning with
an incredible amount of effort having been put in, both from the theoretician's as well as the practitioner's side. However, these algorithms typically require the similarity function to be a positive semi-definite (PSD) function, which can be a limiting factor for several applications. The reasons are: 1) Mercer's condition is a formal statement that is hard to verify, 2) several natural notions of similarity that arise in practical scenarios are not PSD, and 3) it is not clear why an artificial constraint like PSD-ness should limit the usability of a kernel.
Several recent papers have demonstrated that indefinite similarity functions can indeed be successfully used for learning [2, 3, 4, 5]. However, most of the existing work focuses on classification tasks
and provides specialized techniques for the same, albeit with little or no theoretical guarantees. A
notable exception is the line of work by [6, 7, 8] that defines a goodness criterion for a similarity
function and then provides an algorithm that can exploit this goodness criterion to obtain provably
accurate classifiers. However, their definitions are yet again restricted to the problem of classification, as they take a "margin"-based view of the problem that requires positive points to be more similar to positive points than to negative points by at least a constant margin.
In this work, we instead take a "target-value" point of view and require that target values of similar points be similar. Using this view, we propose a generic goodness definition that also admits the goodness definition of [6] for classification as a special case. Furthermore, our definition can be seen
as imposing the existence of a smooth function over a generic space defined by similarity functions,
rather than over a Hilbert space as required by typical goodness definitions of PSD kernels.
We then adapt the landmarking technique of [6] to provide an efficient algorithm that reduces learning tasks to corresponding learning problems over a linear space. The main technical challenge at
this stage is to show that such reductions are able to provide good generalization error bounds for
the learning tasks at hand. To this end, we consider three specific problems: a) regression, b) ordinal
regression, and c) ranking. For each problem, we define appropriate surrogate loss functions, and
show that our algorithm is able to, for each specific learning task, guarantee bounded generalization
error with polynomial sample complexity. Moreover, by adapting a general framework given by
[9], we show that these guarantees do not require the goodness definition to be overly restrictive by
showing that our definitions admit all good PSD kernels as well.
For the problem of real-valued regression, we additionally provide a goodness definition that captures the intuition that usually, only a small number of landmarks are influential w.r.t. the learning
task. However, to recover these landmarks, the uniform sampling technique would require sampling
a large number of landmarks thus increasing the training/test time of the predictor. We address this
issue by applying a sparse vector recovery algorithm given by [10] and show that the resulting sparse
predictor still has bounded generalization error.
We also address an important issue faced by algorithms that use landmarking as a feature construction step, viz. [6, 7, 8], namely that they typically assume separate landmark and training sets for ease of analysis. In practice however, one usually tries to overcome paucity of training data by reusing training data as landmark points as well. We use an argument outlined in [11] to theoretically justify such "double dipping" in our case. The details of the argument are given in Appendix B.
We perform several experiments on benchmark datasets that demonstrate significant performance
gains for our methods over the baseline of kernel regression. Our sparse landmark selection technique provides significantly better predictors that are also more efficient at test time.
Related Work: Existing approaches to extend kernel learning algorithms to indefinite kernels can
be classified into three broad categories: a) those that use indefinite kernels directly with existing
kernel learning algorithms, resulting in non-convex formulations [2, 3]. b) those that convert a given
indefinite kernel into a PSD one by either projecting onto the PSD-cone [4, 5] or performing other
spectral operations [12]. The second approach is usually expensive due to the spectral operations
involved apart from making the method inherently transductive. Moreover, any domain knowledge
stored in the original kernel is lost due to these task oblivious operations and consequently, no
generalization guarantees can be given. c) those that use notions of ?task-kernel alignment? or
equivalently, notions of ?goodness? of a kernel, to give learning algorithms [6, 7, 8]. This approach
enjoys several advantages over the other approaches listed above. These models are able to use
the indefinite kernel directly with existing PSD kernel learning techniques; all the while retaining
the ability to give generalization bounds that quantitatively parallel those of PSD kernel learning
models. In this paper, we adopt the third approach for general supervised learning problem.
2 Problem formulation and Preliminaries
The goal in similarity-based supervised learning is to closely approximate a target predictor y : X → Y over some domain X using a hypothesis f̂(·; K) : X → Y that restricts its interaction with data points to computing similarity values given by K. Now, if the similarity function K is not discriminative enough for the given task then we cannot hope to construct a predictor out of it that enjoys good generalization properties. Hence, it is natural to define the "goodness" of a given similarity function with respect to the learning task at hand.
Definition 1 (Good similarity function: preliminary). Given a learning task y : X → Y over some distribution D, a similarity function K : X × X → ℝ is said to be (ε₀, B)-good with respect to this task if there exists some bounded weighing function w : X → [−B, B] such that for at least a (1 − ε₀) D-fraction of the domain, we have
$$ y(x) = \mathbb{E}_{x' \sim D}\left[ w(x')\, y(x')\, K(x, x') \right]. $$
The above definition is inspired by the definition of a "good" similarity function with respect to classification tasks given in [6]. However, their definition is tied to class labels and thus applies only
Algorithm 1 Supervised learning with similarity functions
Input: A target predictor y : X → Y over a distribution D, an (ε₀, B)-good similarity function K, labeled training points sampled from D: T = {(x₁, y₁), . . . , (xₙ, yₙ)}, loss function ℓ_S : ℝ × Y → ℝ⁺.
Output: A predictor f̂ : X → ℝ with bounded true loss over D
1: Sample d unlabeled landmarks from D: L = {x̃₁, . . . , x̃_d}  // Else subsample d landmarks from T (see Appendix B for details)
2: Define the map Ψ_L : x ↦ (1/√d)(K(x, x̃₁), . . . , K(x, x̃_d)) ∈ ℝᵈ
3: ŵ = arg min_{w ∈ ℝᵈ : ‖w‖₂ ≤ B} Σᵢ ℓ_S(⟨w, Ψ_L(xᵢ)⟩, yᵢ)
4: return f̂ : x ↦ ⟨ŵ, Ψ_L(x)⟩
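To make the landmarking recipe concrete, the following is a minimal Python sketch of Algorithm 1 under stated assumptions: `sim` is an arbitrary (possibly indefinite) similarity function supplied by the user, and scikit-learn's LinearSVR (L2-regularized ε-insensitive regression) stands in for the norm-constrained ERM of Step 3. It is an illustration, not the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import LinearSVR

def landmark_features(X, landmarks, sim):
    """Step 2: map each point to its (scaled) similarities with the landmarks."""
    d = len(landmarks)
    return np.array([[sim(x, z) for z in landmarks] for x in X]) / np.sqrt(d)

def reg_land(X_train, y_train, sim, d=50, eps=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: subsample landmarks from the training set ("double dipping").
    landmarks = X_train[rng.choice(len(X_train), size=d, replace=False)]
    Psi = landmark_features(X_train, landmarks, sim)
    # Step 3: ERM over the landmarked space with epsilon-insensitive loss
    # (the norm constraint is replaced here by L2 regularization).
    model = LinearSVR(epsilon=eps, C=1.0).fit(Psi, y_train)
    # Step 4: the resulting predictor accesses data only through `sim`.
    return lambda X: model.predict(landmark_features(X, landmarks, sim))

# Example with the indefinite Manhattan similarity used in Section 4:
manhattan = lambda x, z: -np.abs(x - z).sum()
```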
to classification tasks. Similar to [6], the above definition calls a similarity function K "good" if the target value y(x) of a given point x can be approximated in terms of (a weighted combination of) the target values of the K-"neighbors" of x. Also, note that this definition automatically enforces a smoothness prior on the framework.
However, the above definition is too rigid. Moreover, it defines goodness in terms of violations, a non-convex loss function. To remedy this, we propose an alternative definition that incorporates an arbitrary (but in practice always convex) loss function.
Definition 2 (Good similarity function: final). Given a learning task y : X → Y over some distribution D, a similarity function K is said to be (ε₀, B)-good with respect to a loss function ℓ_S : ℝ × Y → ℝ if there exists some bounded weighing function w : X → [−B, B] such that if we define a predictor as f(x) := E_{x′∼D}[w(x′)K(x, x′)], then we have E_{x∼D}[ℓ_S(f(x), y(x))] ≤ ε₀.
Note that Definition 2 reduces to Definition 1 for ℓ_S(a, b) = 1_{a≠b}. Moreover, for the case of binary classification where y ∈ {−1, +1}, if we take ℓ_S(a, b) = 1_{ab ≤ Bγ}, then we recover the (ε₀, γ)-goodness definition of a similarity function given in Definition 3 of [6]. Also note that, assuming sup_{x∈X}{|y(x)|} < ∞, we can w.l.o.g. merge w(x′)y(x′) into a single term w(x′).
Having given this definition, we must make sure that "good" similarity functions allow the construction of effective predictors (Utility property). Moreover, we must make sure that the definition does not exclude commonly used PSD kernels (Admissibility property). Below, we formally define these two properties and in later sections, show that for each of the learning tasks considered, our goodness definition satisfies these two properties.
2.1 Utility
Definition 3 (Utility). A similarity function K is said to be ε₀-useful w.r.t. a loss function ℓ_actual(·, ·) if the following holds: there exists a learning algorithm A that, for any ε₁, δ > 0, when given poly(1/ε₁, log(1/δ)) "labeled" and "unlabeled" samples from the input distribution D, with probability at least 1 − δ, generates a hypothesis f̂(x; K) s.t. E_{x∼D}[ℓ_actual(f̂(x), y(x))] ≤ ε₀ + ε₁.
Note that f̂(x; K) is restricted to access the data solely through K.
Here, the ε₀ term captures the misfit or the bias of the similarity function with respect to the learning problem. Notice that the above utility definition allows for learning from unlabeled data points and thus puts our approach in the semi-supervised learning framework.
All our utility guarantees proceed by first using unlabeled samples as landmarks to construct a landmarked space. Next, using the goodness definition, we show the existence of a good linear predictor in the landmarked space. This guarantee is obtained in two steps as outlined in Algorithm 1: first of all we choose d unlabeled landmark points and construct a map Ψ : X → ℝᵈ (see Step 1 of Algorithm 1) and show that there exists a linear predictor over ℝᵈ that closely approximates the predictor f used in Definition 2 (see Lemma 15 in Appendix A). In the second step, we learn a predictor (over the landmarked space) using ERM over a fresh labeled training set (see Step 3 of Algorithm 1). We then use individual task-specific arguments and Rademacher average-based generalization bounds [13], thus proving the utility of the similarity function.
2.2 Admissibility
In order to show that our models are not too rigid, we prove that they admit good PSD kernels. The notion of a good PSD kernel for us will be one that corresponds to a prevalent large margin technique for the given problem. In general, most notions correspond to the existence of a linear operator in the RKHS of the kernel that has small loss at large margin. More formally,
Definition 4 (Good PSD Kernel). Given a learning task y : X → Y over some distribution D, a PSD kernel K : X × X → ℝ with associated RKHS H_K and canonical feature map Φ_K : X → H_K is said to be (ε₀, γ)-good with respect to a loss function ℓ_K : ℝ × Y → ℝ if there exists W* ∈ H_K such that ‖W*‖ = 1 and
$$ \mathbb{E}_{x \sim D}\left[ \ell_K\!\left( \frac{\langle W^*, \Phi_K(x) \rangle}{\gamma},\; y(x) \right) \right] < \epsilon_0. $$
We will show, for all the learning tasks considered, that every (ε₀, γ)-good PSD kernel, when treated as simply a similarity function with no consideration of its RKHS, is also (ε₀ + ε₁, B)-good for arbitrarily small ε₁ with B = h(γ, ε₁) for some function h. To prove these results we will adapt techniques introduced in [9] with certain modifications and task-dependent arguments.
3 Applications
We will now instantiate the general learning model described above to real-valued regression, ordinal
regression and ranking by providing utility and admissibility guarantees. Due to lack of space, we
relegate all proofs as well as the discussion on ranking to the supplementary material (Appendix F).
3.1 Real-valued Regression
Real-valued regression is a quintessential learning problem [1] that has received a lot of attention
in the learning literature. In the following we shall present algorithms for performing real-valued
regression using non-PSD similarity measures. We consider the problem with ℓ_actual(a, b) = |a − b| as the true loss function. For the surrogates ℓ_S and ℓ_K, we choose the ε-insensitive loss function [1] defined as follows:
$$ \ell_\epsilon(a, b) = \ell_\epsilon(a - b) = \begin{cases} 0, & \text{if } |a - b| < \epsilon, \\ |a - b| - \epsilon, & \text{otherwise.} \end{cases} $$
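For concreteness, the surrogate is easy to state in code (a trivial sketch; the vectorized form below is a convenience of ours, not taken from the paper):

```python
import numpy as np

def eps_insensitive_loss(a, b, eps):
    """l_eps(a, b) = max(|a - b| - eps, 0): no penalty inside the eps-tube."""
    return np.maximum(np.abs(np.asarray(a) - np.asarray(b)) - eps, 0.0)
```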
The above loss function automatically gives us notions of good kernels and similarity functions by
appealing to Definitions 4 and 2 respectively. It is easy to transfer error bounds in terms of absolute
error to those in terms of mean squared error (MSE), a commonly used performance measure for
real-valued regression. See Appendix D for further discussion on the choice of the loss function.
Using the landmarking strategy described in Section 2.1, we can reduce the problem of real regression to that of a linear regression problem in the landmarked space. More specifically, the ERM step in Algorithm 1 becomes the following:
$$ \hat{w} = \arg\min_{w \in \mathbb{R}^d : \|w\|_2 \le B} \sum_{i} \ell_\epsilon\big( \langle w, \Psi_L(x_i) \rangle - y_i \big). $$
There exist solvers (for instance [14]) to efficiently solve the above problem on linear spaces. Using proof techniques sketched in Section 2.1 along with specific arguments for the ε-insensitive loss, we can prove generalization guarantees and hence utility guarantees for the similarity function.
Theorem 5. Every similarity function that is (ε₀, B)-good for a regression problem with respect to the ε-insensitive loss function ℓ_ε(·, ·) is (ε₀ + ε)-useful with respect to absolute loss as well as (Bε₀ + Bε)-useful with respect to mean squared error. Moreover, both the dimensionality of the landmarked space as well as the labeled sample complexity can be bounded by O((B²/ε₁²) log(1/δ)).
We are also able to prove the following (tight) admissibility result:
Theorem 6. Every PSD kernel that is (ε₀, γ)-good for a regression problem is, for any ε₁ > 0, (ε₀ + ε₁, O(1/(γ²ε₁²)))-good as a similarity function as well. Moreover, for any ε₁ < 1/2 and any γ < 1, there exists a regression instance and a corresponding kernel that is (0, γ)-good for the regression problem but only (ε₁, B)-good as a similarity function for B = Ω(1/(γ²ε₁²)).
3.2 Sparse regression models
An artifact of a random choice of landmarks is that very few of them might turn out to be "informative" with respect to the prediction problem at hand. For instance, in a network, there might exist hubs or authoritative nodes that yield rich information about the learning problem. If the relative abundance of such nodes is low then random selection would compel us to choose a large number of landmarks before enough "informative" ones have been collected.
However this greatly increases training and testing times due to the increased costs of constructing
the landmarked space. Thus, the ability to prune away irrelevant landmarks would speed up training
and test routines. We note that this issue has been addressed before in literature [8, 12] by way
of landmark selection heuristics. In contrast, we guarantee that our predictor will select a small number of landmarks while incurring bounded generalization error. However, this requires a careful restructuring of the learning model to incorporate the "informativeness" of landmarks.
Definition 7. A similarity function K is said to be (ε₀, B, τ)-good for a real-valued regression problem y : X → ℝ if for some bounded weight function w : X → [−B, B] and choice function R : X → {0, 1} with E_{x∼D}[R(x)] = τ, the predictor f : x ↦ E_{x′∼D}[w(x′)K(x, x′) | R(x′)] has bounded ε-insensitive loss, i.e., E_{x∼D}[ℓ_ε(f(x), y(x))] < ε₀.
The role of the choice function is to single out informative landmarks, while τ specifies the relative density of informative landmarks. Note that the above definition is similar in spirit to the goodness definition presented in [15]. While the motivation behind [15] was to give an improved admissibility result for binary classification, we squarely focus on the utility guarantees, with the aim of accelerating our learning algorithms via landmark pruning.
We prove the utility guarantee in three steps as outlined in Appendix D. First, we use the usual landmarking step to project the problem onto a linear space. This step guarantees the following:
Theorem 8. Given a similarity function that is (ε₀, B, τ)-good for a regression problem, there exists a randomized map Ψ : X → ℝᵈ for d = O((B²/(τε₁²)) log(1/δ)) such that with probability at least 1 − δ, there exists a linear operator f̃ : x ↦ ⟨w, x⟩ over ℝᵈ with ‖w‖₁ ≤ B whose ε-insensitive loss is bounded by ε₀ + ε₁. Moreover, with the same confidence we have ‖w‖₀ ≤ 3dτ/2.
Our proof follows that of [15]; however, we additionally prove sparsity of w as well. The number of landmarks required here is an Ω(1/τ) fraction greater than that required by Theorem 5. This formally captures the intuition presented earlier of a small fraction of dimensions (read landmarks) being actually relevant to the learning problem. So, in the second step, we use the Forward Greedy Selection algorithm given in [10] to learn a sparse predictor. The use of this learning algorithm necessitates the use of a different generalization bound in the final step to complete the utility guarantee given below. We refer the reader to Appendix D for the details of the algorithm and its utility analysis.
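The following is a minimal sketch of fully corrective forward greedy selection for squared loss over the landmarked space. It is an illustration in the spirit of [10], not their exact procedure, and the names `Psi` (landmarked features) and `k` (sparsity budget) are ours.

```python
import numpy as np

def forward_greedy(Psi, y, k):
    """Greedily select up to k coordinates of Psi; refit fully on the support."""
    n, d = Psi.shape
    support, w = [], np.zeros(d)
    for _ in range(k):
        residual = y - Psi @ w
        grad = Psi.T @ residual / n       # negative gradient of the squared loss
        grad[support] = 0.0               # only consider new coordinates
        support.append(int(np.argmax(np.abs(grad))))
        # Fully corrective step: least squares restricted to the support.
        w_s, *_ = np.linalg.lstsq(Psi[:, support], y, rcond=None)
        w = np.zeros(d)
        w[support] = w_s
    return w
```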
Theorem 9. Every similarity function that is (ε₀, B, τ)-good for a regression problem with respect to the ε-insensitive loss function ℓ_ε(·, ·) is (ε₀ + ε)-useful with respect to absolute loss as well, with the dimensionality of the landmarked space being bounded by O((B²/(τε₁²)) log(1/δ)) and the labeled sample complexity being bounded by O((B²/ε₁²) log(B/(δε₁))). Moreover, this utility can be achieved by an O(τd)-sparse predictor on the landmarked space.
We note that the improvements obtained here by using the sparse learning methods of [10] provide an Ω(τ) increase in sparsity. We now prove admissibility results for this sparse learning model. We do this by showing that the dense model analyzed in Theorem 5 and that given in Definition 7 are interpretable in each other for an appropriate selection of parameters. The guarantees in Theorem 6 can then be invoked to conclude the admissibility proof.
Theorem 10. Every (ε₀, B)-good similarity function K is also (ε₀, B, w̄/B)-good, where w̄ = E_{x∼D}[|w(x)|]. Moreover, every (ε₀, B, τ)-good similarity function K is also (ε₀, B/τ)-good.
Using Theorem 6, we immediately have the following corollary:
Corollary 11. Every PSD kernel that is (ε₀, γ)-good for a regression problem is, for any ε₁ > 0, (ε₀ + ε₁, O(1/(γ²ε₁²)), 1)-good as a similarity function as well.
3.3 Ordinal Regression
The problem of ordinal regression requires an accurate prediction of (discrete) labels coming from
a finite ordered set [r] = {1, 2, . . . , r}. The problem is similar to both classification and regression,
but has some distinct features due to which it has received independent attention [16, 17] in domains
such as product ratings etc. The most popular performance measure for this problem is the absolute
loss which is the absolute difference between the predicted and the true labels.
A natural and rather tempting way to solve this problem is to relax the problem to real-valued regression and threshold the output of the learned real-valued predictor using predefined thresholds b₁, . . . , b_r to get discrete labels. Although this approach has been prevalent in literature [17], as the discussion in the supplementary material shows, this leads to poor generalization guarantees in our model. More specifically, a goodness definition constructed around such a direct reduction is only able to ensure (ε₀ + 1)-utility, i.e., the absolute error rate is always greater than 1.
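As a quick illustration of the thresholding step discussed above (a sketch under the assumption of fixed, sorted thresholds b₁ < · · · < b_{r−1}; the function name is ours):

```python
import numpy as np

def threshold_predict(scores, thresholds):
    """Map real-valued scores to ordinal labels 1..r: label = 1 + #{b_j <= score}."""
    return 1 + np.searchsorted(np.asarray(thresholds), scores, side="right")
```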
One of the reasons for this is the presence of the thresholding operation that makes it impossible to
distinguish between instances that would not be affected by small perturbations to the underlying
real-valued predictor and those that would. To remedy this, we enforce a (soft) margin with respect to thresholding that makes the formulation more robust to noise. More formally, we expect that if a point belongs to the label i, then in addition to being sandwiched between the thresholds b_i and b_{i+1}, it should be separated from these by a margin as well, i.e., b_i + γ ≤ f(x) ≤ b_{i+1} − γ.
This is a direct generalization of the margin principle in classification, where we expect ⟨w, x⟩ > b + γ for positively labeled points and ⟨w, x⟩ < b − γ for negatively labeled points. Of course, whereas classification requires a single threshold, we require several, depending upon the number of labels.
For any x ∈ ℝ, let [x]₊ = max{x, 0}. Thus, if we define the γ-margin loss function to be [x]_γ := [γ − x]₊ (note that this is simply the well-known hinge loss function scaled by a factor of γ), we can define our goodness criterion as follows:
Definition 12. A similarity function K is said to be (ε₀, B)-good for an ordinal regression problem y : X → [r] if for some bounded weight function w : X → [−B, B] and some (unknown but fixed) set of thresholds {b_i}ᵢ₌₁ʳ with b₁ = −∞, the predictor f : x ↦ E_{x′∼D}[w(x′)K(x, x′)] satisfies
$$ \mathbb{E}_{x \sim D}\Big[ \big[f(x) - b_{y(x)}\big]_\gamma + \big[b_{y(x)+1} - f(x)\big]_\gamma \Big] < \epsilon_0. $$
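In code, the penalty of Definition 12 can be sketched as follows (the names, the sentinel thresholds, and the indexing convention are our assumptions of convenience):

```python
import numpy as np

def margin_loss(x, gamma):
    """[x]_gamma = [gamma - x]_+ : hinge loss scaled to margin gamma."""
    return np.maximum(gamma - x, 0.0)

def ordinal_margin_penalty(score, label, b, gamma):
    """Label i should satisfy b[i-1] + gamma <= score <= b[i] - gamma."""
    return margin_loss(score - b[label - 1], gamma) + margin_loss(b[label] - score, gamma)

# Example: thresholds with +/-inf sentinels for r = 3 labels.
b = np.array([-np.inf, 0.0, 1.0, np.inf])
print(ordinal_margin_penalty(score=0.5, label=2, b=b, gamma=0.2))  # 0.0: well inside
```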
We now give utility guarantees for our learning model. We shall give guarantees on both the misclassification error as well as the absolute error of our learned predictor. We say that a set of points x₁, . . . , x_i, . . . is β-spaced if min_{i≠j}{|x_i − x_j|} ≥ β. Define the function ψ_γ(x) := min{[x/γ]₊, 1}.
Theorem 13. Let K be a similarity function that is (ε₀, B)-good for an ordinal regression problem with respect to β-spaced thresholds and γ-margin loss. Let λ = max{γ/β, 1}. Then K is (λε₀)-useful with respect to ordinal regression error (absolute loss). Moreover, K is (λε₀ ψ_γ(β/λ))-useful with respect to the zero-one mislabeling error as well.
We can bound both the dimensionality of the landmarked space as well as the labeled sample complexity by O((B²/ε₁²) log(1/δ)). Notice that for ε₀ < 1 and large enough d, n, we can ensure that the ordinal regression error rate is also bounded above by 1, since sup_{x∈[0,1], γ>0} γψ_γ(x) = 1. This is in contrast with the direct reduction to real-valued regression, which has ordinal regression error rate bounded below by 1. This indicates the advantage of the present model over a naive reduction to regression.
We can show that our definition of a good similarity function admits all good PSD kernels as well. The kernel goodness criterion we adopt corresponds to the large margin framework proposed by [16]. We refer the reader to Appendix E.3 for the definition and give the admissibility result below.
Theorem 14. Every PSD kernel that is (ε₀, γ)-good for an ordinal regression problem is also (ε₀/γ₁ + ε₁, O(1/(γ₁²ε₁²)))-good as a similarity function with respect to the γ₁-margin loss for any γ₁, ε₁ > 0. Moreover, for any ε₁ < γ₁/2, there exists an ordinal regression instance and a corresponding kernel that is (0, γ)-good for the ordinal regression problem but only (ε₁, B)-good as a similarity function with respect to the γ₁-margin loss function for B = Ω(1/(γ₁²ε₁²)).
Figure 1: Performance of landmarking algorithms with an increasing number of landmarks. (a) Mean squared error for landmarking (RegLand), sparse landmarking (RegLand-Sp) and kernel regression (KR) on real-valued regression datasets. (b) Average absolute error for landmarking (ORLand) and kernel regression (KR) on ordinal regression datasets.
(a) Mean squared error for real regression:

Dataset | Sigmoid KR | Sigmoid Land-Sp | Manhattan KR | Manhattan Land-Sp
Abalone [18] (N=4177, d=8) | 2.1e-02 (8.3e-04) | 6.2e-03 (8.4e-04) | 1.7e-02 (7.1e-04) | 6.0e-03 (3.7e-04)
Bodyfat [19] (N=252, d=14) | 4.6e-04 (6.5e-05) | 9.5e-05 (1.3e-04) | 3.9e-04 (2.2e-05) | 3.5e-05 (1.3e-05)
CAHousing [19] (N=20640, d=8) | 5.9e-02 (2.3e-04) | 1.6e-02 (6.2e-04) | 5.8e-02 (1.9e-04) | 1.5e-02 (1.4e-04)
CPUData [20] (N=8192, d=12) | 4.1e-02 (1.6e-03) | 1.4e-03 (1.7e-04) | 4.3e-02 (1.6e-03) | 1.2e-03 (3.2e-05)
PumaDyn-8 [20] (N=8192, d=8) | 2.3e-01 (4.6e-03) | 1.4e-02 (4.5e-04) | 2.3e-01 (4.5e-03) | 1.4e-02 (4.8e-04)
PumaDyn-32 [20] (N=8192, d=32) | 1.8e-01 (3.6e-03) | 1.4e-02 (3.7e-04) | 1.8e-01 (3.6e-03) | 1.4e-02 (3.1e-04)

(b) Mean absolute error for ordinal regression:

Dataset | Sigmoid KR | Sigmoid ORLand | Manhattan KR | Manhattan ORLand
Wine-Red [18] (N=1599, d=11) | 6.8e-01 (2.8e-02) | 4.2e-01 (3.8e-02) | 6.7e-01 (3.0e-02) | 4.5e-01 (3.2e-02)
Wine-White [18] (N=4898, d=11) | 6.2e-01 (2.0e-02) | 8.9e-01 (8.5e-01) | 6.2e-01 (2.0e-02) | 4.9e-01 (1.5e-02)
Bank-8 [20] (N=8192, d=8) | 2.9e+0 (6.2e-02) | 6.1e-01 (4.4e-02) | 2.7e+0 (6.6e-02) | 6.3e-01 (1.7e-02)
Bank-32 [20] (N=8192, d=32) | 2.7e+0 (1.2e-01) | 1.6e+0 (2.3e-02) | 2.6e+0 (8.1e-02) | 1.6e+0 (9.4e-02)
House-8 [20] (N=22784, d=8) | 2.8e+0 (9.3e-03) | 1.5e+0 (2.0e-02) | 2.7e+0 (1.0e-02) | 1.4e+0 (1.2e-02)
House-16 [20] (N=22784, d=16) | 2.7e+0 (2.0e-02) | 1.5e+0 (1.0e-02) | 2.8e+0 (2.0e-02) | 1.4e+0 (2.3e-02)

Table 1: Performance of landmarking-based algorithms (with 50 landmarks) vs. baseline kernel regression (KR). Values in parentheses indicate standard deviations. The first column indicates dataset source (in brackets), size (N) and dimensionality (d).
Due to lack of space we refer the reader to Appendix F for a discussion on ranking models that
includes utility and admissibility guarantees with respect to the popular NDCG loss.
4 Experimental Results
In this section we present an empirical evaluation of our learning models for the problems of realvalued regression and ordinal regression on benchmark datasets taken from a variety of sources
[18, 19, 20]. In all cases, we compare our algorithms against kernel regression (KR), a well known
technique [21] for non-linear regression, whose predictor is of the form:
$$ f : x \mapsto \frac{\sum_{x_i \in T} y(x_i) K(x, x_i)}{\sum_{x_i \in T} K(x, x_i)}, $$
where T is the training set. We selected KR as the baseline as it is a popular regression method that
does not require similarity functions to be PSD. For ordinal regression problems, we rounded off the
result of the KR predictor to get a discrete label. We implemented all our algorithms as well as the
baseline KR method in Matlab. In all our experiments we report results across 5 random splits using the (indefinite) Sigmoid kernel, K(x, y) = tanh(a⟨x, y⟩ + r), and the Manhattan kernel, K(x, y) = −‖x − y‖₁. Following standard practice, we fixed r = 1 and a = 1/d_orig for the Sigmoid kernel, where d_orig is the dimensionality of the dataset.
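A minimal sketch of the KR baseline and the two similarities above (the vectorized helpers and their names are ours; the parameter choices follow the settings just stated):

```python
import numpy as np

def sigmoid_kernel(X, Z, a, r=1.0):
    """K(x, z) = tanh(a <x, z> + r); indefinite for general a, r."""
    return np.tanh(a * X @ Z.T + r)

def manhattan_kernel(X, Z):
    """K(x, z) = -||x - z||_1 (negative Manhattan distance; not PSD)."""
    return -np.abs(X[:, None, :] - Z[None, :, :]).sum(axis=-1)

def kernel_regression(X_train, y_train, X_test, kernel):
    """f(x) = sum_i y_i K(x, x_i) / sum_i K(x, x_i); assumes nonzero denominators."""
    K = kernel(X_test, X_train)
    return (K @ y_train) / K.sum(axis=1)
```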
Real-valued regression: For this experiment, we compare our methods (RegLand and RegLand-Sp) with the KR method. For RegLand, we constructed the landmarked space as specified in Algorithm 1 and learned a linear predictor using the LIBLINEAR package [14] that minimizes ε-insensitive loss. In the second algorithm (RegLand-Sp), we used the sparse learning algorithm of [10] on the landmarked space to learn the best predictor for a given sparsity level. Due to its simplicity and good convergence properties, we implemented the Fully Corrective version of the Forward Greedy Selection algorithm with squared loss as the surrogate.
We evaluated all methods using Mean Squared Error (MSE) on the test set. Figure 1a shows the MSE incurred by our methods along with reference values of accuracies obtained by KR as landmark sizes increase. The plots clearly show that our methods incur significantly less error than KR. Moreover, RegLand-Sp learns more accurate predictors using the same number of landmarks. For instance, when learning using the Sigmoid kernel on the CPUData dataset, at 20 landmarks, RegLand is able to guarantee an MSE of 0.016 whereas RegLand-Sp offers an MSE of less than 0.02; MLKR (the KR baseline of [21]) is only able to guarantee an MSE rate of 0.04 for this dataset. In Table 1a, we compare accuracies of the two algorithms when given 50 landmark points with those of KR for the Sigmoid and Manhattan kernels. We find that in all cases, RegLand-Sp gives superior accuracies to KR. Moreover, the Manhattan kernel seems to match or outperform the Sigmoid kernel on all the datasets.
Ordinal Regression: Here, we compare our method with the baseline KR method on benchmark
datasets. As mentioned in Section 3.3, our method uses the EXC formulation of [16] along with
landmarking scheme given in Algorithm 1. We implemented a gradient descent-based solver (ORLand) to solve the primal formulation of EXC and used fixed equi-spaced thresholds instead of
learning them as suggested by [16]. Of the six datasets considered here, the two Wine datasets are
ordinal regression datasets where the quality of the wine is to be predicted on a scale from 1 to 10.
The remaining four datasets are regression datasets whose labels were subjected to equi-frequency
binning to obtain ordinal regression datasets [16]. We measured the average absolute error (AAE)
for each method. Figure 1b compares ORLand with KR as the number of landmarks increases. Table 1b compares accuracies of ORLand for 50 landmark points with those of KR for Sigmoid and
Manhattan kernels. In almost all cases, ORLand gives a much better performance than KR. The
Sigmoid kernel seems to outperform the Manhattan kernel on a couple of datasets.
We refer the reader to Appendix G for additional experimental results.
5 Conclusion
In this work we considered the general problem of supervised learning using non-PSD similarity
functions. We provided a goodness criterion for similarity functions w.r.t. various learning tasks.
This allowed us to construct efficient learning algorithms with provable generalization error bounds.
At the same time, we were able to show, for each learning task, that our criterion is not too restrictive
in that it admits all good PSD kernels. We then focused on the problem of identifying influential
landmarks with the aim of learning sparse predictors. We presented a model that formalized the
intuition that typically only a small fraction of landmarks is influential for a given learning problem.
We adapted existing sparse vector recovery algorithms within our model to learn provably sparse
predictors with bounded generalization error. Finally, we empirically evaluated our learning algorithms on benchmark regression and ordinal regression tasks. In all cases, our learning methods,
especially the sparse recovery algorithm, consistently outperformed the kernel regression baseline.
An interesting direction for future research would be learning good similarity functions à la metric learning or kernel learning. It would also be interesting to conduct large-scale experiments on real-world data such as social networks that naturally capture the notion of similarity amongst nodes.
Acknowledgments
P. K. is supported by a Microsoft Research India Ph.D. fellowship award. Part of this work was done
while P. K. was an intern at Microsoft Research Labs India, Bangalore.
References
[1] Bernhard Schölkopf and Alex J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[2] Bernard Haasdonk. Feature Space Interpretation of SVMs with Indefinite Kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(4):482–492, 2005.
[3] Cheng Soon Ong, Xavier Mary, Stéphane Canu, and Alexander J. Smola. Learning with non-positive Kernels. In 21st Annual International Conference on Machine Learning, 2004.
[4] Yihua Chen, Maya R. Gupta, and Benjamin Recht. Learning Kernels from Indefinite Similarities. In 26th Annual International Conference on Machine Learning, pages 145–152, 2009.
[5] Ronny Luss and Alexandre d'Aspremont. Support Vector Machine Classification with Indefinite Kernels. In 21st Annual Conference on Neural Information Processing Systems, 2007.
[6] Maria-Florina Balcan and Avrim Blum. On a Theory of Learning with Similarity Functions. In 23rd Annual International Conference on Machine Learning, pages 73–80, 2006.
[7] Liwei Wang, Cheng Yang, and Jufu Feng. On Learning with Dissimilarity Functions. In 24th Annual International Conference on Machine Learning, pages 991–998, 2007.
[8] Purushottam Kar and Prateek Jain. Similarity-based Learning via Data Driven Embeddings. In 25th Annual Conference on Neural Information Processing Systems, 2011.
[9] Nathan Srebro. How Good Is a Kernel When Used as a Similarity Measure? In 20th Annual Conference on Computational Learning Theory, pages 323–335, 2007.
[10] Shai Shalev-Shwartz, Nathan Srebro, and Tong Zhang. Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints. SIAM Journal on Optimization, 20(6):2807–2832, 2010.
[11] Nathan Srebro, Shai Ben-David, and Ali Rahimi. Generalization Bounds for Indefinite Kernel Machines. In NIPS 2008 Workshop: New Challenges in Theoretical Machine Learning, 2008.
[12] Yihua Chen, Eric K. Garcia, Maya R. Gupta, Ali Rahimi, and Luca Cazzanti. Similarity-based Classification: Concepts and Algorithms. Journal of Machine Learning Research, 10:747–776, 2009.
[13] Sham M. Kakade, Karthik Sridharan, and Ambuj Tewari. On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization. In 22nd Annual Conference on Neural Information Processing Systems, 2008.
[14] Chia-Hua Ho and Chih-Jen Lin. Large-scale Linear Support Vector Regression. http://www.csie.ntu.edu.tw/~cjlin/papers/linear-svr.pdf, retrieved on May 18, 2012.
[15] Maria-Florina Balcan, Avrim Blum, and Nathan Srebro. Improved Guarantees for Learning via Similarity Functions. In 21st Annual Conference on Computational Learning Theory, pages 287–298, 2008.
[16] Wei Chu and S. Sathiya Keerthi. Support Vector Ordinal Regression. Neural Computation, 19(3):792–815, 2007.
[17] Shivani Agarwal. Generalization Bounds for Some Ordinal Regression Algorithms. In 19th International Conference on Algorithmic Learning Theory, pages 7–21, 2008.
[18] A. Frank and Arthur Asuncion. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml, 2010. University of California, Irvine, School of Information and Computer Sciences.
[19] StatLib Dataset Repository. http://lib.stat.cmu.edu/datasets/. Carnegie Mellon University.
[20] Delve Dataset Repository. http://www.cs.toronto.edu/~delve/data/datasets.html. University of Toronto.
[21] Kilian Q. Weinberger and Gerald Tesauro. Metric Learning for Kernel Regression. In 11th International Conference on Artificial Intelligence and Statistics, pages 612–619, 2007.
| 4508 |@word repository:3 version:1 polynomial:1 seems:2 nd:1 liblinear:1 reduction:4 offering:1 rkhs:3 existing:6 com:1 yet:1 chu:1 must:2 informative:4 compel:1 landmarked:11 plot:1 interpretable:1 v:1 greedy:2 instantiate:1 weighing:2 selected:1 intelligence:2 theoretician:1 incredible:1 provides:3 equi:2 cse:1 node:3 toronto:2 accessed:1 purushot:1 zhang:1 along:3 constructed:2 direct:3 become:1 prove:7 x0:13 theoretically:1 indeed:1 inspired:1 automatically:2 little:1 xti:1 actual:3 solver:2 increasing:2 becomes:1 project:1 kwk0:1 bounded:18 moreover:14 underlying:1 provided:1 lib:1 jufu:1 prateek:2 minimizes:1 orland:7 machince:1 guarantee:24 every:8 classifier:1 scaled:1 yn:1 positive:4 before:2 limit:1 mainstay:1 solely:2 merge:1 ndcg:1 might:2 delve:2 ease:1 bi:5 practical:1 acknowledgment:1 enforces:1 testing:1 practice:3 lost:1 definite:1 empirical:1 significantly:3 adapting:1 liwei:1 confidence:1 get:2 onto:2 cannot:1 selection:7 unlabeled:5 operator:2 put:2 ronny:1 applying:1 impossible:1 risk:1 www:2 map:3 demonstrated:1 attention:2 convex:3 focused:1 simplicity:1 recovery:4 immediately:1 identifying:1 formalized:1 proving:1 handle:1 notion:7 limiting:1 target:6 construction:2 us:1 hypothesis:2 expensive:1 approximated:1 labeled:8 yk1:1 binning:1 role:1 csie:1 haasdonk:1 capture:4 wang:1 kilian:1 mentioned:1 intuition:3 benjamin:1 complexity:5 ong:1 gerald:1 tight:1 ali:2 incur:1 negatively:1 upon:1 eric:1 necessitates:1 various:1 corrective:1 separated:1 jain:2 distinct:1 effective:1 artificial:2 shalev:1 whose:2 heuristic:1 supplementary:2 valued:14 solve:3 say:1 relax:1 otherwise:1 ability:3 statistic:1 transductive:1 mislabeling:1 final:2 advantage:2 propose:3 interaction:1 coming:1 product:1 cazzanti:1 relevant:1 uci:2 olkopf:1 convergence:1 double:1 rademacher:1 ben:1 depending:1 develop:1 ac:1 stat:1 measured:1 school:1 received:2 implemented:3 predicted:2 c:1 indicate:2 trading:1 direction:1 closely:2 material:2 require:6 hx:1 generalization:18 preliminary:2 ntu:1 hold:1 around:1 considered:4 ic:1 algorithmic:1 achieves:1 adopt:2 wine:4 outperformed:1 label:8 xld:2 tanh:1 successfully:1 weighted:1 hope:1 mit:1 clearly:1 always:2 aim:2 rather:2 pn:2 conjunction:1 corollary:2 focus:2 viz:1 improvement:1 consistently:1 prevalent:2 indicates:1 maria:2 hk:3 greatly:1 contrast:2 baseline:7 dependent:1 rigid:2 typically:3 provably:2 sketched:1 arg:2 issue:3 classification:14 html:1 retaining:1 ness:1 special:1 construct:4 having:2 sampling:2 kw:1 broad:1 future:1 report:2 ephane:1 quantitatively:1 bangalore:2 oblivious:1 few:1 individual:1 keerthi:1 microsoft:4 karthik:1 psd:24 ab:1 evaluation:1 alignment:1 violation:1 analyzed:1 behind:1 primal:1 predefined:1 accurate:3 arthur:1 conduct:1 theoretical:2 instance:6 increased:1 earlier:1 soft:1 column:1 goodness:20 a6:1 cost:2 deviation:1 predictor:29 uniform:1 too:3 stored:1 st:4 density:1 international:6 randomized:1 recht:1 siam:1 off:1 rounded:1 squarely:1 pumadyn:2 again:1 squared:6 choose:3 admit:2 return:1 reusing:1 exclude:1 b2:1 subsumes:1 includes:1 notable:1 ranking:5 later:1 view:3 try:1 lab:2 lot:1 kwk:1 sup:2 red:1 recover:2 parallel:1 shai:2 asuncion:1 accuracy:6 efficiently:1 correspond:1 yield:1 spaced:3 misfit:1 lu:1 xtn:1 classified:1 definition:39 against:1 frequency:1 involved:1 naturally:1 associated:1 proof:4 couple:1 gain:1 sampled:3 dataset:6 irvine:1 popular:3 knowledge:1 dimensionality:5 hilbert:1 routine:1 actually:1 alexandre:1 higher:1 supervised:12 wherein:1 improved:2 wei:1 jw:4 formulation:5 
evaluated:2 done:1 furthermore:2 stage:1 smola:2 hand:3 lack:2 defines:2 yihua:2 artifact:1 quality:1 mary:1 verify:1 true:3 remedy:2 concept:1 xavier:1 hence:2 regularization:2 read:1 white:1 abalone:1 criterion:7 pdf:1 complete:1 demonstrate:3 balcan:2 consideration:1 invoked:1 sigmoid:9 superior:1 specialized:1 probr:1 empirically:1 insensitive:7 extend:1 interpretation:1 approximates:1 kwk2:2 significant:1 refer:4 mellon:1 imposing:1 smoothness:1 rd:5 outlined:3 canu:1 i6:1 access:1 similarity:57 etc:1 recent:2 purushottam:2 retrieved:1 irrelevant:1 apart:1 belongs:1 scenario:1 driven:1 certain:1 tesauro:1 kar:2 binary:3 arbitrarily:1 yi:3 seen:1 greater:2 additional:1 prune:1 tempting:1 semi:2 sham:1 reduces:2 rahimi:2 smooth:1 technical:1 usability:1 adapt:3 match:1 offer:1 chia:1 lin:1 luca:1 award:1 parenthesis:2 prediction:3 regression:66 florina:2 metric:2 cmu:1 kernel:60 agarwal:1 achieved:1 addition:1 whereas:1 fellowship:1 x2x:1 addressed:1 else:1 source:2 sch:1 archive:1 sure:2 incorporates:1 spirit:1 effectiveness:2 sridharan:1 practitioner:1 call:1 presence:1 yang:1 split:1 enough:4 easy:1 embeddings:1 variety:1 xj:1 reduce:1 lesser:1 multiclass:1 br:1 six:1 utility:16 accelerating:1 effort:1 proceed:1 matlab:1 useful:6 tewari:1 clear:1 listed:1 amount:1 ph:1 concentrated:1 svms:1 category:1 shivani:1 reduced:1 http:4 specifies:1 outperform:2 exist:2 restricts:1 canonical:1 notice:2 overly:1 discrete:3 carnegie:1 shall:2 affected:1 indefinite:12 four:1 threshold:7 blum:2 fraction:4 convert:1 cone:1 realworld:1 package:1 almost:1 reader:4 chih:1 appendix:10 bound:11 maya:2 distinguish:1 aae:1 cheng:2 annual:9 adapted:1 constraint:2 alex:1 x2:1 generates:1 nathan:4 speed:1 argument:5 min:3 performing:2 influential:3 combination:1 poor:1 jr:1 across:1 appealing:1 kakade:1 tw:1 making:1 modification:1 projecting:1 restricted:2 erm:2 taken:1 previously:1 turn:1 cjlin:1 ordinal:25 subjected:1 prajain:1 end:1 operation:4 w2rd:2 incurring:1 away:1 generic:3 appropriate:2 spectral:2 enforce:1 alternative:1 weinberger:1 ho:1 existence:3 original:1 remaining:1 ensure:2 hinge:1 paucity:1 exploit:1 restrictive:2 especially:2 sandwiched:1 feng:1 strategy:1 usual:1 surrogate:3 said:6 gradient:1 amongst:1 separate:1 landmark:32 exc:2 collected:1 reason:2 fresh:1 provable:1 assuming:1 providing:1 equivalently:1 statement:1 frank:1 negative:1 unknown:1 perform:1 svr:1 datasets:17 benchmark:4 finite:1 descent:1 extended:1 y1:1 perturbation:1 arbitrary:1 rating:1 introduced:1 david:1 namely:1 required:3 specified:1 california:1 learned:3 nip:1 address:3 able:8 suggested:1 beyond:1 usually:3 below:4 pattern:1 sparsity:5 challenge:2 ambuj:1 max:2 misclassification:1 natural:4 treated:1 scheme:1 technology:1 realvalued:2 aspremont:1 naive:1 faced:1 prior:1 literature:3 relative:2 manhattan:7 loss:30 admissibility:9 expect:2 fully:1 interesting:2 srebro:4 authoritative:1 incurred:1 xp:1 informativeness:1 mercer:1 thresholding:2 principle:1 bank:2 land:2 landmarking:11 statlib:1 course:1 supported:1 soon:1 enjoys:2 side:1 formal:1 allow:1 bias:1 india:4 institute:1 neighbor:1 absolute:11 sparse:17 overcome:1 dimension:1 rich:1 forward:2 commonly:2 avg:1 social:1 transaction:1 approximate:1 pruning:1 iitk:1 bernhard:1 ml:1 b1:2 xt1:1 conclude:1 sathiya:1 discriminative:2 quintessential:1 xi:8 shwartz:1 why:1 table:3 additionally:2 learn:4 transfer:1 robust:1 inherently:1 mse:6 poly:1 constructing:1 domain:4 sp:8 main:1 dense:1 motivation:1 subsample:1 arise:1 noise:1 allowed:1 positively:1 x1:1 
tong:1 house:2 tied:1 third:1 learns:1 hw:4 abundance:1 kanpur:1 theorem:11 specific:4 jen:1 showing:2 hub:1 admits:3 gupta:2 exists:9 workshop:1 albeit:1 avrim:2 kr:20 dissimilarity:1 margin:13 kx:1 chen:2 garcia:1 simply:2 relegate:1 intern:1 restructuring:1 ordered:1 xl1:2 applies:1 hua:1 corresponds:2 satisfies:2 goal:2 consequently:1 careful:1 hard:1 typical:1 specifically:2 justify:1 lemma:1 bernard:1 experimental:2 la:1 exception:1 formally:4 select:1 bodyfat:1 support:4 alexander:1 indian:1 incorporate:1 |
3,876 | 4,509 | Query Complexity of Derivative-Free Optimization
Kevin G. Jamieson
University of Wisconsin
Madison, WI 53706, USA
Robert D. Nowak
University of Wisconsin
Madison, WI 53706, USA
Benjamin Recht
University of Wisconsin
Madison, WI 53706, USA
[email protected]
[email protected]
[email protected]
Abstract
This paper provides lower bounds on the convergence rate of Derivative Free Optimization (DFO) with noisy function evaluations, exposing a fundamental and
unavoidable gap between the performance of algorithms with access to gradients
and those with access to only function evaluations. However, there are situations
in which DFO is unavoidable, and for such situations we propose a new DFO algorithm that is proved to be near optimal for the class of strongly convex objective
functions. A distinctive feature of the algorithm is that it uses only Boolean-valued
function comparisons, rather than function evaluations. This makes the algorithm
useful in an even wider range of applications, such as optimization based on paired
comparisons from human subjects, for example. We also show that regardless of
whether DFO is based on noisy function evaluations or Boolean-valued function
comparisons, the convergence rate is the same.
1 Introduction
Optimizing large-scale complex systems often requires the tuning of many parameters. With training data or simulations one can evaluate the relative merit, or incurred loss, of different parameter
settings, but it may be unclear how each parameter influences the overall objective function. In such
cases, derivatives of the objective function with respect to the parameters are unavailable. Thus,
we have seen a resurgence of interest in Derivative Free Optimization (DFO) [1, 2, 3, 4, 5, 6, 7, 8].
When function evaluations are noiseless, DFO methods can achieve the same rates of convergence
as noiseless gradient methods up to a small factor depending on a low-order polynomial of the dimension [9, 5, 10]. This leads one to wonder if the same equivalence can be extended to the case
when function evaluations and gradients are noisy.
Sadly, this paper proves otherwise. We show that when function evaluations are noisy, the optimization error of any DFO is Ω(√(1/T)), where T is the number of evaluations. This lower bound holds even for strongly convex functions. In contrast, noisy gradient methods exhibit Θ(1/T) error scaling for strongly convex functions [9, 11]. A consequence of our theory is that finite differencing cannot achieve the rates of gradient methods when the function evaluations are noisy.
On the positive side, we also present a new derivative-free algorithm that achieves this lower bound
with near optimal dimension dependence. Moreover, the algorithm uses only boolean comparisons
of function values, not actual function values. This makes the algorithm applicable to situations in
which the optimization is only able to probably correctly decide if the value of one configuration is
better than the value of another. This is especially interesting in optimization based on human subject
feedback, where paired comparisons are often used instead of numerical scoring. The convergence
rate of the new algorithm is optimal in terms of T and near-optimal in terms of its dependence
on the ambient dimension. Surprisingly, our lower bounds show that this new algorithm that uses
only function comparisons achieves the same rate in terms of T as any algorithm that has access to
function evaluations.
2 Problem formulation and background
We now formalize the notation and conventions for our analysis of DFO. A function f is strongly convex with constant τ on a convex set B ⊂ ℝⁿ if there exists a constant τ > 0 such that
$$ f(y) \ \ge\ f(x) + \langle \nabla f(x), y - x \rangle + \frac{\tau}{2} \|x - y\|^2 $$
for all x, y ∈ B. The gradient of f, if it exists, denoted ∇f, is Lipschitz with constant L if ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖ for some L > 0. The class of strongly convex functions with Lipschitz gradients defined on a nonempty, convex set B ⊂ ℝⁿ which take their minimum in B with parameters τ and L is denoted by F_{τ,L,B}.
The problem we consider is minimizing a function f ∈ F_{τ,L,B}. The function f is not explicitly known. An optimization procedure may only query the function in one of the following two ways.
Function Evaluation Oracle: For any point x ∈ B an optimization procedure can observe
$$ E_f(x) = f(x) + w $$
where w ∈ ℝ is a random variable with E[w] = 0 and E[w²] = σ².
Function Comparison Oracle: For any pair of points x, y ∈ B an optimization procedure can observe a binary random variable C_f(x, y) satisfying
$$ P\big( C_f(x, y) = \operatorname{sign}\{f(y) - f(x)\} \big) \ \ge\ \frac{1}{2} + \min\left\{ \delta_0,\ \mu |f(y) - f(x)|^{\kappa - 1} \right\} \qquad (1) $$
for some 0 < δ₀ ≤ 1/2, μ > 0 and κ ≥ 1. When κ = 1, without loss of generality assume μ ≤ δ₀ ≤ 1/2. Note κ = 1 implies that the comparison oracle is correct with a probability that is greater than 1/2 and independent of x, y. If κ > 1, then the oracle's reliability decreases as the difference between f(x) and f(y) decreases.
To illustrate how the function comparison oracle and function evaluation oracles relate to each other, suppose C_f(x, y) = sign{E_f(y) − E_f(x)} where E_f(x) is a function evaluation oracle with additive noise w. If w is Gaussian distributed with mean zero and variance σ², then κ = 2 and μ ≥ (4πeσ²)^{−1/2} (see supplementary materials). In fact, this choice of w corresponds to Thurstone's law of comparative judgment, which is a popular model for outcomes of pairwise comparisons from human subjects [12]. If w is a "spikier" distribution such as a two-sided Gamma distribution with shape parameter in the range of (0, 1], then all values of κ ∈ (1, 2] can be realized (see supplementary materials).
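A small simulation sketch of this reduction (illustrative only; the test function and constants are our assumptions): with additive Gaussian noise, the empirical probability of a correct comparison sits just above 1/2, and the gap shrinks linearly in |f(y) − f(x)| near zero, consistent with κ = 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluation_oracle(f, x, sigma):
    return f(x) + sigma * rng.normal()

def comparison_oracle(f, x, y, sigma):
    return np.sign(evaluation_oracle(f, y, sigma) - evaluation_oracle(f, x, sigma))

f = lambda x: 0.5 * np.sum(x ** 2)
x, y, sigma = np.array([0.0]), np.array([0.1]), 1.0
correct = [comparison_oracle(f, x, y, sigma) == np.sign(f(y) - f(x))
           for _ in range(50_000)]
print(np.mean(correct))  # slightly above 1/2 for this small |f(y) - f(x)|
```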
Interest in the function comparison oracle is motivated by certain popular derivative-free optimization procedures that use only comparisons of function evaluations (e.g. [7]) and by optimization problems involving human subjects making paired comparisons (for instance, getting fitted for prescription lenses or a hearing aid, where unknown parameters specific to each person are tuned with the familiar queries "better or worse?"). Pairwise comparisons have also been suggested as a novel way to tune web-search algorithms [13]. Pairwise comparison strategies have previously been analyzed in the finite setting where the task is to identify the best alternative among a finite set of alternatives (sometimes referred to as the dueling-bandit problem) [13, 14]. The function comparison oracle presented in this work and its analysis are novel. The main contributions of this work are as follows: (i) lower bounds for the function evaluation oracle in the presence of measurement noise; (ii) lower bounds for the function comparison oracle in the presence of noise; and (iii) an algorithm for the function comparison oracle, which can also be applied to the function evaluation oracle setting, that nearly matches both the lower bounds of (i) and (ii).
We prove our lower bounds for strongly convex functions with Lipschitz gradients defined on a compact, convex set B, and because these problems are a subset of those involving all convex functions
(and have non-empty intersection with problems where f is merely Lipschitz), the lower bound also
applies to these larger classes. While there are known theoretical results for DFO in the noiseless
setting [15, 5, 10], to the best of our knowledge we are the first to characterize lower bounds for
DFO in the stochastic setting. Moreover, we believe we are the first to show a novel upper bound for
stochastic DFO using a function comparison oracle (which also applies to the function evaluation
oracle). However, there are algorithms with upper bounds on the rates of convergence for stochastic DFO with the function evaluation oracle [15, 16]. We discuss the relevant results in the next section following the lower bounds.
While there remain many open problems in stochastic DFO (see Section 6), rates of convergence
with a stochastic gradient oracle are well known and were first lower bounded by Nemirovski and
Yudin [15]. These classic results were recently tightened to show a dependence on the dimension
of the problem [17]. And then tightened again to show a better dependence on the noise [11] which
matches the upper bound achieved by stochastic gradient descent [9]. The aim of this work is to
start filling in the knowledge gaps of stochastic DFO so that it is as well understood as the stochastic
gradient oracle. Our bounds are based on simple techniques borrowed from the statistical learning
literature that use natural functions and oracles in the same spirit of [11].
3 Main results
The results below are presented with simplifying constants that encompass many factors to aid in exposition. Explicit constants are given in the proofs in Sections 4 and 5. Throughout, we denote the minimizer of f as x*_f. The expectation in the bounds is with respect to the noise in the oracle queries and (possible) optimization algorithm randomization.
3.1 Query complexity of the function comparison oracle
Theorem 1. For every f ∈ F_{τ,L,B} let C_f be a function comparison oracle with parameters (κ, μ, δ₀). Then for n ≥ 8 and sufficiently large T
$$ \inf_{\hat{x}_T} \sup_{f \in \mathcal{F}_{\tau,L,B}} \mathbb{E}\big[ f(\hat{x}_T) - f(x^*_f) \big] \ \ge\ \begin{cases} c_1 \exp\!\left( -c_2 \sqrt{T/n} \right) & \text{if } \kappa = 1 \\ c_3 \left( \frac{n}{T} \right)^{\frac{1}{2(\kappa - 1)}} & \text{if } \kappa > 1 \end{cases} $$
where the infimum is over the collection of all possible estimators of x*_f using at most T queries to a function comparison oracle and the supremum is taken with respect to all problems in F_{τ,L,B} and function comparison oracles with parameters (κ, μ, δ₀). The constants c₁, c₂, c₃ depend on the oracle and function class parameters, as well as the geometry of B, but are independent of T and n.
For upper bounds we propose a specific algorithm based on coordinate-descent in Section 5 and
prove the following theorem for the case of unconstrained optimization, that is, B = Rn .
Theorem 2. For every f ∈ F_{τ,L,B} with B = ℝⁿ let C_f be a function comparison oracle with parameters (κ, μ, δ₀). Then there exists a coordinate-descent algorithm that is adaptive to unknown κ ≥ 1 that outputs an estimate x̂_T after T function comparison queries such that with probability 1 − δ
$$ \sup_{f \in \mathcal{F}_{\tau,L,B}} \mathbb{E}\big[ f(\hat{x}_T) - f(x^*_f) \big] \ \le\ \begin{cases} c_1 \exp\!\left( -c_2 \sqrt{T/n} \right) & \text{if } \kappa = 1 \\ c_3\, n \left( \frac{n}{T} \right)^{\frac{1}{2(\kappa - 1)}} & \text{if } \kappa > 1 \end{cases} $$
where c₁, c₂, c₃ depend on the oracle and function class parameters as well as T, n, and 1/δ, but only poly-logarithmically.
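The paper's actual algorithm appears in Section 5; the following is only a hedged sketch of the general idea of coordinate descent driven purely by pairwise comparisons, where repeated oracle calls are majority-voted to combat comparison noise (all names and the step-shrinking schedule are our assumptions):

```python
import numpy as np

def majority_compare(oracle, x, y, m=51):
    """Majority vote of m noisy comparisons; +1 suggests f(y) > f(x)."""
    return np.sign(sum(oracle(x, y) for _ in range(m)))

def comparison_coordinate_descent(oracle, x0, step=1.0, sweeps=20):
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(len(x)):
            for s in (+step, -step):
                y = x.copy()
                y[i] += s
                if majority_compare(oracle, x, y) < 0:  # majority says f(y) < f(x)
                    x = y
                    break
        step *= 0.7  # shrink the probing step between sweeps
    return x
```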
3.2 Query complexity of the function evaluation oracle
Theorem 3. For every f ∈ F_{τ,L,B} let E_f be a function evaluation oracle with variance σ². Then for n ≥ 8 and sufficiently large T
$$ \inf_{\hat{x}_T} \sup_{f \in \mathcal{F}_{\tau,L,B}} \mathbb{E}\big[ f(\hat{x}_T) - f(x^*_f) \big] \ \ge\ c \left( \frac{\sigma^2 n}{T} \right)^{\frac{1}{2}} $$
where the infimum is taken with respect to the collection of all possible estimators of x*_f using just T queries to a function evaluation oracle and the supremum is taken with respect to all problems in F_{τ,L,B} and function evaluation oracles with variance σ². The constant c depends on the oracle and function class parameters, as well as the geometry of B, but is independent of T and n.
Because a function evaluation oracle can always be turned into a function comparison oracle (see discussion above), the algorithm and upper bound in Theorem 2 with κ = 2 applies to many typical function evaluation oracles (e.g. additive Gaussian noise), yielding an upper bound of (n³σ²/T)^{1/2}, ignoring constants and log factors. This matches the rate of convergence as a function of T and σ², but has worse dependence on the dimension n.
Alternatively, under a less restrictive setting, Nemirovski and Yudin proposed two algorithms for the class of convex, Lipschitz functions that obtain rates of n^{1/2}/T^{1/4} and p(n)/T^{1/2}, respectively, where p(n) was left as an unspecified polynomial of n [15]. While focusing on stochastic DFO with bandit feedback, Agarwal et al. built on the ideas developed in [15] to obtain a result that they point out implies a convergence rate of n^{16}/T^{1/2} in the optimization setting considered here [16]. Whether or not these rates can be improved to those obtained under the more restrictive function classes of above is an open question.
A related but fundamentally different problem is online (or stochastic) convex optimization with multi-point feedback [18, 5, 19]. Essentially, this setting allows the algorithm to probe the value of the function f plus noise at multiple locations where the noise changes at each time step, but each set of samples at each time experiences the same noise. Because the noise model of that work is incompatible with the one considered here, no comparisons should be made between the two.
4 Lower Bounds
The lower bounds in Theorems 1 and 3 are proved using a general minimax bound [20, Thm. 2.5].
Our proofs are most related to the approach developed in [21] for active learning, which like optimization involves a Markovian sampling process. Roughly speaking, the lower bounds are established by considering a simple case of the optimization problem in which the global minimum is
known a priori to belong to a finite set. Since the simple case is ?easier? than the original optimization, the minimum number of queries required for a desired level of accuracy in this case yields a
lower bound for the original problem.
The following theorem is used to prove the bounds. In the terms of the theorem, f is a function to
be minimized and Pf is the probability model governing the noise associated with queries when f
is the true function.
Theorem 4. [20, Thm. 2.5] Consider a class of functions F and an associated family of probability measures {P_f}_{f∈F}. Let M ≥ 2 be an integer and f₀, f₁, . . . , f_M be functions in F. Let d(·, ·) : F × F → ℝ be a semi-distance and assume that:
1. d(f_i, f_j) ≥ 2s > 0, for all 0 ≤ i < j ≤ M,
2. (1/M) Σ_{j=1}^{M} KL(P_j‖P₀) ≤ a log M,
where the Kullback–Leibler divergence KL(P_i‖P₀) := ∫ log(dP_i/dP₀) dP_i is assumed to be well-defined (i.e., P₀ is a dominating measure) and 0 < a < 1/8. Then
$$ \inf_{\hat{f}} \sup_{f \in \mathcal{F}} P\big( d(\hat{f}, f) \ge s \big) \ \ge\ \inf_{\hat{f}} \max_{f \in \{f_0, \ldots, f_M\}} P\big( d(\hat{f}, f) \ge s \big) \ \ge\ \frac{\sqrt{M}}{1 + \sqrt{M}} \left( 1 - 2a - 2\sqrt{\frac{a}{\log M}} \right) > 0, $$
where the infimum is taken over all possible estimators based on a sample from P_f.
We are concerned with the functions in the class F := F_{τ,L,B}. The volume of B will affect only constant factors in our bounds, so we will simply denote the class of functions by F and refer explicitly to B only when necessary. Let x_f := arg min_x f(x), for all f ∈ F. The semi-distance we use is d(f, g) := ‖x_f − x_g‖, for all f, g ∈ F. Note that each point in B can be specified by one of many f ∈ F. So the problem of selecting an f is equivalent to selecting a point x ∈ B. Indeed, the semi-distance defines a collection of equivalence classes in F (i.e., all functions having a minimum at x ∈ B are equivalent). For every f ∈ F we have inf_{g∈F} f(x_g) = inf_{x∈B} f(x), which is a useful identity to keep in mind.
We now construct the functions f₀, f₁, . . . , f_M that will be used for our proofs. Let Ω = {−1, 1}ⁿ so that each ω ∈ Ω is a vertex of the n-dimensional hypercube. Let V ⊂ Ω with cardinality |V| ≥ 2^{n/8} such that for all ω ≠ ω′ ∈ V, we have ρ_H(ω, ω′) ≥ n/8, where ρ_H(·, ·) is the Hamming distance. It is known that such a set exists by the Varshamov–Gilbert bound [20, Lemma 2.9]. Denote the elements of V by ω₀, ω₁, . . . , ω_M. Next we state some elementary bounds on the functions that will be used in our analysis.
Lemma 1. For ρ > 0 define the set B ⊂ ℝⁿ to be the ℓ_∞ ball of radius ρ and define the functions on B: f_i(x) := (τ/2)‖x − ρω_i‖², for i = 0, . . . , M, ω_i ∈ V, and x_i := arg min_x f_i(x) = ρω_i. Then for all 0 ≤ i < j ≤ M and x ∈ B the functions f_i(x) satisfy
1. f_i is strongly convex-τ with Lipschitz-τ gradients and x_i ∈ B
2. ‖x_i − x_j‖ ≥ ρ√(n/2)
3. |f_i(x) − f_j(x)| ≤ 2τnρ².
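To get a feel for the packing set V used above, here is a quick greedy construction (an illustrative sketch only; the Varshamov–Gilbert argument itself is probabilistic and guarantees |V| ≥ 2^{n/8}):

```python
import numpy as np

def hypercube_packing(n, max_tries=20_000, seed=0):
    """Greedily collect hypercube vertices with pairwise Hamming distance >= n/8."""
    rng = np.random.default_rng(seed)
    V = []
    for _ in range(max_tries):
        w = rng.choice([-1, 1], size=n)
        if all(np.sum(w != v) >= n / 8 for v in V):
            V.append(w)
    return V

print(len(hypercube_packing(64)))  # the achievable size grows exponentially in n
```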
We are now ready to prove Theorems 1 and 3. Each proof uses the functions f₀, . . . , f_M a bit differently, and since the noise model is also different in each case, the KL divergence is bounded differently in each proof. We use the fact that if X and Y are random variables distributed according to Bernoulli distributions P_X and P_Y with parameters 1/2 + η and 1/2 − η, then KL(P_X‖P_Y) ≤ 4η²/(1/2 − η). Also, if X ∼ N(μ_X, σ²) =: P_X and Y ∼ N(μ_Y, σ²) =: P_Y, then KL(P_X‖P_Y) = ‖μ_X − μ_Y‖²/(2σ²).
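The Bernoulli bound follows from log t ≤ t − 1; the one-line derivation below is our addition, for completeness:

```latex
\mathrm{KL}(P_X \| P_Y)
  = \left(\tfrac{1}{2}+\eta\right)\log\frac{\tfrac{1}{2}+\eta}{\tfrac{1}{2}-\eta}
  + \left(\tfrac{1}{2}-\eta\right)\log\frac{\tfrac{1}{2}-\eta}{\tfrac{1}{2}+\eta}
  = 2\eta \log\frac{\tfrac{1}{2}+\eta}{\tfrac{1}{2}-\eta}
  \le 2\eta \cdot \frac{2\eta}{\tfrac{1}{2}-\eta}
  = \frac{4\eta^2}{\tfrac{1}{2}-\eta}.
```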
4.1 Proof of Theorem 1
First we will obtain the bound for the case ? > 1. Let the comparison oracle satisfy
P (Cfi (x, y) = sign{fi (y)
fi (x)}) =
1
+ min ?|fi (y)
2
fi (x)|?
1
,
0
.
In words, $C_{f_i}(x,y)$ is correct with probability as large as the right-hand side of the above and is monotonically increasing in $f_i(y) - f_i(x)$. Let $\{x_k, y_k\}_{k=1}^T$ be a sequence of $T$ pairs in $B$ and let $\{C_{f_i}(x_k, y_k)\}_{k=1}^T$ be the corresponding sequence of noisy comparisons. We allow the sequence $\{x_k, y_k\}_{k=1}^T$ to be generated in any way subject to the Markovian assumption that $C_{f_i}(x_k, y_k)$ given $(x_k, y_k)$ is conditionally independent of $\{x_i, y_i\}_{i<k}$. For $i = 0, \dots, M$ and $\ell = 1, \dots, T$ let $P_{i,\ell}$ denote the joint probability distribution of $\{x_k, y_k, C_{f_i}(x_k, y_k)\}_{k=1}^{\ell}$, let $Q_{i,\ell}$ denote the conditional distribution of $C_{f_i}(x_\ell, y_\ell)$ given $(x_\ell, y_\ell)$, and let $S_\ell$ denote the conditional distribution of $(x_\ell, y_\ell)$ given $\{x_k, y_k, C_{f_i}(x_k, y_k)\}_{k=1}^{\ell-1}$. Note that $S_\ell$ is only a function of the underlying optimization algorithm and does not depend on $i$. Then

$$KL(P_{i,T}\|P_{j,T}) = E_{P_{i,T}}\!\left[\log\frac{\prod_{\ell=1}^T Q_{i,\ell} S_\ell}{\prod_{\ell=1}^T Q_{j,\ell} S_\ell}\right] = E_{P_{i,T}}\!\left[\log\frac{\prod_{\ell=1}^T Q_{i,\ell}}{\prod_{\ell=1}^T Q_{j,\ell}}\right] = \sum_{\ell=1}^T E_{P_{i,T}}\!\left[E_{P_{i,T}}\!\left[\log\frac{Q_{i,\ell}}{Q_{j,\ell}}\ \middle|\ \{x_k, y_k\}_{k=1}^T\right]\right]$$
$$\le\ T \sup_{x_1, y_1 \in B} E_{P_{i,1}}\!\left[E_{P_{i,1}}\!\left[\log\frac{Q_{i,1}}{Q_{j,1}}\ \middle|\ x_1, y_1\right]\right].$$
By the third claim of Lemma 1, $|f_i(x) - f_j(x)| \le 2\tau n\varepsilon^2$, and therefore the bound above is less than or equal to the KL divergence between the Bernoulli distributions with parameters $\frac12 \pm \delta(2\tau n\varepsilon^2)^{\kappa-1}$, yielding the bound

$$KL(P_{i,T}\|P_{j,T}) \le \frac{4T\delta^2 (2\tau n\varepsilon^2)^{2(\kappa-1)}}{1/2 - \delta(2\tau n\varepsilon^2)^{\kappa-1}} \le 16\,T\delta^2 (2\tau n\varepsilon^2)^{2(\kappa-1)}$$
provided $\varepsilon$ is sufficiently small. We also assume $\varepsilon$ (or, equivalently, $B$) is sufficiently small so that $\delta|f_i(x) - f_j(x)|^{\kappa-1} \le \delta_0$. We are now ready to apply Theorem 4. Recalling that $M \ge 2^{n/8}$, we want to choose $\varepsilon$ such that

$$KL(P_{i,T}\|P_{j,T}) \le 16T\delta^2(2\tau n\varepsilon^2)^{2(\kappa-1)} \le \frac{n}{8}\,a\log(2) \le a\log M$$

with an $a$ small enough so that we can apply the theorem. By setting $a = 1/16$ and equating the two sides of the equation we have

$$\varepsilon = \varepsilon_T := \frac{1}{\sqrt{2\tau n}}\left(\frac{n\log(2)}{2048\,\delta^2 T}\right)^{\frac{1}{4(\kappa-1)}}$$

(note that this also implies a sequence of sets $B_T$ by the definition of the functions in Lemma 1). Thus, the semi-distance satisfies

$$d(f_j, f_i) = \|x_j - x_i\| \ge \sqrt{n/2}\;\varepsilon_T = \frac{1}{2\sqrt\tau}\left(\frac{n\log(2)}{2048\,\delta^2 T}\right)^{\frac{1}{4(\kappa-1)}} =: 2s_T.$$
Applying Theorem 4 we have

$$\inf_{\hat f}\ \sup_{f \in \mathcal F} P\big(\|x_{\hat f} - x_f\| \ge s_T\big) \ge \inf_{\hat f}\ \max_{i \in \{0,\dots,M\}} P\big(\|x_{\hat f} - x_i\| \ge s_T\big) = \inf_{\hat f}\ \max_{i \in \{0,\dots,M\}} P\big(d(\hat f, f_i) \ge s_T\big)$$
$$\ge \frac{\sqrt M}{1+\sqrt M}\left(1 - 2a - \sqrt{\frac{2a}{\log M}}\right) > 1/7,$$

where the final inequality holds since $M \ge 2$ and $a = 1/16$. Strong convexity implies that $f(x) - f(x_f) \ge \frac{\tau}{2}\|x - x_f\|^2$ for all $f \in \mathcal F$ and $x \in B$. Therefore

$$\inf_{\hat f}\ \sup_{f \in \mathcal F} P\Big(f(x_{\hat f}) - f(x_f) \ge \frac{\tau}{2}s_T^2\Big) \ge \inf_{\hat f}\ \max_{i \in \{0,\dots,M\}} P\Big(f_i(x_{\hat f}) - f_i(x_i) \ge \frac{\tau}{2}s_T^2\Big)$$
$$\ge \inf_{\hat f}\ \max_{i \in \{0,\dots,M\}} P\Big(\frac{\tau}{2}\|x_{\hat f} - x_i\|^2 \ge \frac{\tau}{2}s_T^2\Big) = \inf_{\hat f}\ \max_{i \in \{0,\dots,M\}} P\big(\|x_{\hat f} - x_i\| \ge s_T\big) > 1/7.$$
Finally, applying Markov's inequality we have

$$\inf_{\hat f}\ \sup_{f \in \mathcal F} E\big[f(x_{\hat f}) - f(x_f)\big] \ \ge\ \frac17 \cdot \frac{1}{32}\left(\frac{n\log(2)}{2048\,\delta^2 T}\right)^{\frac{1}{2(\kappa-1)}}.$$

4.2 Proof of Theorem 1 for $\kappa = 1$
To handle the case when $\kappa = 1$ we use functions of the same form, but the construction is slightly different. Let $\ell$ be a positive integer and let $M = \ell^n$. Let $\{\theta_i\}_{i=1}^M$ be a set of uniformly spaced points in $B$, which we define to be the unit cube in $\mathbb R^n$, so that $\|\theta_i - \theta_j\| \ge \ell^{-1}$ for all $i \ne j$. Define $f_i(x) := \|x - \theta_i\|^2$, $i = 1, \dots, M$. Let $s := \frac{1}{2\ell}$ so that $d(f_i, f_j) := \|x_{\theta_i} - x_{\theta_j}\| \ge 2s$. Because $\kappa = 1$, we have $P(C_{f_i}(x,y) = \mathrm{sign}\{f_i(y) - f_i(x)\}) \ge \frac12 + \delta$ for some $\delta > 0$, all $i \in \{1, \dots, M\}$, and all $x, y \in B$. We bound $KL(P_{i,T}\|P_{j,T})$ in exactly the same way as we bounded it in Section 4.1 except that now we have $C_{f_i}(x_k, y_k) \sim \mathrm{Bernoulli}(\frac12 + \delta)$ and $C_{f_j}(x_k, y_k) \sim \mathrm{Bernoulli}(\frac12 - \delta)$. It then follows that if we wish to apply the theorem, we want to choose $s$ so that

$$KL(P_{i,T}\|P_{j,T}) \le \frac{4T\delta^2}{1/2 - \delta} \le a\log M = a\,n\log\frac{1}{2s}$$

for some $a < 1/8$. Using the same sequence of steps as in Section 4.1 we have

$$\inf_{\hat f}\ \sup_{f \in \mathcal F} E\big[f(x_{\hat f}) - f(x_f)\big] \ \ge\ \frac17\left(\frac12\right)^2 \exp\left\{-\frac{128\,T\delta^2}{n(1/2 - \delta)}\right\}.$$
4.3 Proof of Theorem 3

Let $f_i$ for all $i = 0, \dots, M$ be the functions considered in Lemma 1. Recall that the evaluation oracle is defined to be $E_f(x) := f(x) + w$, where $w$ is a random variable (independent of all other random variables under consideration) with $E[w] = 0$ and $E[w^2] = \sigma^2 > 0$. Let $\{x_k\}_{k=1}^T$ be a sequence of points in $B \subset \mathbb R^n$ and let $\{E_f(x_k)\}_{k=1}^T$ denote the corresponding sequence of noisy evaluations of $f \in \mathcal F$. For $\ell = 1, \dots, T$ let $P_{i,\ell}$ denote the joint probability distribution of $\{x_k, E_{f_i}(x_k)\}_{k=1}^{\ell}$, let $Q_{i,\ell}$ denote the conditional distribution of $E_{f_i}(x_\ell)$ given $x_\ell$, and let $S_\ell$ denote the conditional distribution of $x_\ell$ given $\{x_k, E_f(x_k)\}_{k=1}^{\ell-1}$. $S_\ell$ is a function of the underlying optimization algorithm and does not depend on $i$. We can now bound the KL divergence between any two hypotheses as in Section 4.1:

$$KL(P_{i,T}\|P_{j,T}) \le T \sup_{x_1 \in B} E_{P_{i,1}}\!\left[E_{P_{i,1}}\!\left[\log\frac{Q_{i,1}}{Q_{j,1}}\ \middle|\ x_1\right]\right].$$
To compute a bound, let us assume that $w$ is Gaussian distributed. Then

$$KL(P_{i,T}\|P_{j,T}) \le T \sup_{z \in B} KL\big(N(f_i(z), \sigma^2)\,\|\,N(f_j(z), \sigma^2)\big) = \frac{T}{2\sigma^2}\sup_{z \in B}|f_i(z) - f_j(z)|^2 \le \frac{T}{2\sigma^2}\big(2\tau n\varepsilon^2\big)^2$$

by the third claim of Lemma 1. We then repeat the same procedure as in Section 4.1 to attain

$$\inf_{\hat f}\ \sup_{f \in \mathcal F} E\big[f(x_{\hat f}) - f(x_f)\big] \ \ge\ \frac17 \cdot \frac{1}{32}\left(\frac{n\,\sigma^2\log(2)}{64\,T}\right)^{\frac12}.$$
5 Upper bounds
The algorithm that achieves the upper bound using a pairwise comparison oracle is a combination
of standard techniques and methods from the convex optimization and statistical learning literature.
The algorithm is explained in full detail in the supplementary materials, and is summarized as follows. At each iteration the algorithm picks a coordinate uniformly at random from the n possible
dimensions and then performs an approximate line search. By exploiting the fact that the function is strongly convex with Lipschitz gradients, one guarantees using standard arguments that the
approximate line search makes a sufficient decrease in the objective function value in expectation
[22, Ch.9.3]. If the pairwise comparison oracle made no errors then the approximate line search
is accomplished by a binary-search-like scheme, essentially a golden section line-search algorithm
[23]. However, when responses from the oracle are only probably correct we make the line-search
robust to errors by repeating the same query until we can be confident about the true, uncorrupted
direction of the pairwise comparison using a standard procedure from the active learning literature
[24] (a similar technique was also implemented for the bandit setting of derivative-free optimization
[8]). Because the analysis of each component is either known or elementary, we only sketch the
proof here and leave the details to the supplementary materials.
5.1 Coordinate descent
Given a candidate solution $x_k$ after $k \ge 0$ iterations, the algorithm defines a search direction $d_k = e_i$ where $i$ is chosen uniformly at random from the $n$ possible dimensions and $e_i$ is a vector of all zeros except for a one in the $i$th coordinate. We note that while we only analyze the case where the search direction $d_k$ is a coordinate direction, an analysis with the same result can be obtained with $d_k$ chosen uniformly from the unit sphere. Given $d_k$, a line search is then performed to find an $\alpha_k \in \mathbb R$ such that $f(x_{k+1}) - f(x_k)$ is sufficiently small, where $x_{k+1} = x_k + \alpha_k d_k$. In fact, as we will see in the next section, for some input parameter $\eta > 0$ the line search is guaranteed to return an $\alpha_k$ such that $|\alpha_k - \alpha^*| \le \eta$ where $\alpha^* = \arg\min_{\alpha \in \mathbb R} f(x_k + \alpha d_k)$. Using the fact that the gradients of $f$ are
Lipschitz ($L$) we have

$$f(x_k + \alpha_k d_k) - f(x_k + \alpha^* d_k) \le \frac{L}{2}\|(\alpha_k - \alpha^*)d_k\|^2 = \frac{L}{2}|\alpha_k - \alpha^*|^2 \le \frac{L}{2}\eta^2.$$

If we define $\bar\alpha_k = -\frac{\langle\nabla f(x_k), d_k\rangle}{L}$ then we have

$$f(x_k + \alpha_k d_k) - f(x_k) \le f(x_k + \alpha^* d_k) - f(x_k) + \frac{L}{2}\eta^2 \le f(x_k + \bar\alpha_k d_k) - f(x_k) + \frac{L}{2}\eta^2 \le -\frac{\langle\nabla f(x_k), d_k\rangle^2}{2L} + \frac{L}{2}\eta^2$$

where the last inequality follows from applying the fact that the gradients are Lipschitz ($L$). Arranging the bound and taking the expectation with respect to $d_k$ we get

$$E[f(x_{k+1}) - f(x^*)] - \frac{L}{2}\eta^2 \le E[f(x_k) - f(x^*)] - \frac{E[\|\nabla f(x_k)\|^2]}{2nL} \le E[f(x_k) - f(x^*)]\left(1 - \frac{\tau}{4nL}\right)$$

where the second inequality follows from the fact that $f$ is strongly convex ($\tau$). If we define $\gamma_k := E[f(x_k) - f(x^*)]$ then we equivalently have

$$\gamma_{k+1} - \frac{2nL^2\eta^2}{\tau} \le \left(1 - \frac{\tau}{4nL}\right)\left(\gamma_k - \frac{2nL^2\eta^2}{\tau}\right) \le \left(1 - \frac{\tau}{4nL}\right)^{k+1}\left(\gamma_0 - \frac{2nL^2\eta^2}{\tau}\right)$$

which leads to the following result.
Theorem 5. Let $f \in \mathcal F_{\tau,L,B}$ with $B = \mathbb R^n$. For any $\eta > 0$ assume the line search returns an $\alpha_k$ that is within $\eta$ of the optimal after at most $T_\ell(\eta)$ queries from the pairwise comparison oracle. If $x_K$ is an estimate of $x^* = \arg\min_x f(x)$ after requesting no more than $K$ pairwise comparisons, then

$$\sup_f\ E[f(x_K) - f(x^*)] \le \frac{4nL^2\eta^2}{\tau} \qquad\text{whenever}\qquad K \ge T_\ell(\eta)\,\frac{4nL}{\tau}\log\left(\frac{f(x_0) - f(x^*)}{2nL^2\eta^2/\tau}\right),$$

where the expectation is with respect to the random choice of $d_k$ at each iteration.
This implies that if we wish $\sup_f E[f(x_K) - f(x^*)] \le \epsilon$ it suffices to take $\eta = \sqrt{\tau\epsilon/(4nL^2)}$, so that at most

$$\frac{4nL}{\tau}\log\left(\frac{f(x_0) - f(x^*)}{\epsilon/2}\right)\ T_\ell\!\left(\sqrt{\frac{\tau\epsilon}{4nL^2}}\right)$$

pairwise comparisons are requested.
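The following sketch implements the procedure behind Theorem 5 for a noiseless oracle (an illustration, not the authors' code; the bracketing interval and the simplified comparison-based line search are assumptions, standing in for the more careful search of Section 5.2).

```python
import numpy as np

def comparison_oracle(f, x, y):
    """Noiseless pairwise comparison: sign of f(y) - f(x)."""
    return np.sign(f(y) - f(x))

def line_search(f, x, d, lo=-10.0, hi=10.0, eta=1e-4):
    """Shrink [lo, hi] around the minimizer of the convex slice
    a -> f(x + a d) using only comparison outcomes."""
    while hi - lo > eta:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        # If f(x + m1 d) < f(x + m2 d), the minimizer lies left of m2.
        if comparison_oracle(f, x + m1 * d, x + m2 * d) > 0:
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def coordinate_descent(f, x0, iters=500, eta=1e-4, rng=None):
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(len(x))           # random coordinate direction e_i
        d = np.zeros_like(x)
        d[i] = 1.0
        x = x + line_search(f, x, d, eta=eta) * d
    return x

# Example: strongly convex quadratic with known minimizer.
n = 5
x_star = np.arange(n, dtype=float)
f = lambda x: 0.5 * np.sum((x - x_star) ** 2)
print(coordinate_descent(f, np.zeros(n)))   # approaches x_star
```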
5.2 Line search
This section is concerned with minimizing a function $f(x_k + \alpha d_k)$ over $\alpha \in \mathbb R$. In particular, we wish to find an $\alpha_k \in \mathbb R$ such that $|\alpha_k - \alpha^*| \le \eta$ where $\alpha^* = \arg\min_{\alpha \in \mathbb R} f(x_k + \alpha d_k)$. First assume that the function comparison oracle makes no errors. The line search operates by maintaining a pair of boundary points $\alpha^+, \alpha^-$ such that if at some iterate we have $\alpha^* \in [\alpha^-, \alpha^+]$, then at the next iterate we are guaranteed that $\alpha^*$ is still contained inside the boundary points but the width $|\alpha^+ - \alpha^-|$ is reduced by a factor of $1/2$. An initial pair of boundary points $\alpha^+ > 0$ and $\alpha^- < 0$ is found using a simple binary search. Thus, regardless of how far away or close $\alpha^*$ is, we converge to it exponentially fast. Exploiting the fact that $f$ is strongly convex ($\tau$) with Lipschitz ($L$) gradients, we can bound how far away or close $\alpha^*$ is from our initial iterate.
Theorem 6. Let $f \in \mathcal F_{\tau,L,B}$ with $B = \mathbb R^n$ and let $C_f$ be a function comparison oracle that makes no errors. Let $x \in \mathbb R^n$ be an initial position and let $d \in \mathbb R^n$ be a search direction with $\|d\| = 1$. If $\alpha_K$ is an estimate of $\alpha^* = \arg\min_\alpha f(x + d\alpha)$ that is output from the line search after requesting no more than $K$ pairwise comparisons, then for any $\eta > 0$

$$|\alpha_K - \alpha^*| \le \eta \qquad\text{whenever}\qquad K \ge 2\log_2\left(\frac{256\,L\,\big(f(x) - f(x + d\alpha^*)\big)}{\tau^2\eta^2}\right).$$
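One way to realize the halving invariant with at most two comparisons per step is sketched below (an illustration under the assumption of a noiseless oracle; the factor-of-two cost per halving matches the $2\log_2$ flavor of the bound above).

```python
import numpy as np

def halving_line_search(g_cmp, lo, hi, eta):
    """Shrink a bracket [lo, hi] containing the minimizer of a convex g
    to width <= eta, using at most 2 comparisons per halving step.
    g_cmp(a, b) returns sign(g(b) - g(a))."""
    while hi - lo > eta:
        m = (lo + hi) / 2
        q1, q3 = (lo + m) / 2, (m + hi) / 2
        if g_cmp(q1, m) <= 0:      # g(q1) <= g(m): minimizer left of m
            hi = m
        elif g_cmp(q3, m) <= 0:    # g(q3) <= g(m): minimizer right of m
            lo = m
        else:                      # g(m) below both quarter points
            lo, hi = q1, q3
    return (lo + hi) / 2

# Worked example on a one-dimensional quadratic slice g(a) = (a - 1.3)^2.
g = lambda a: (a - 1.3) ** 2
alpha = halving_line_search(lambda a, b: np.sign(g(b) - g(a)), -4.0, 4.0, 1e-3)
print(abs(alpha - 1.3) <= 1e-3)   # True
```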
5.3 Making the line search robust to errors
Now assume that the responses from the pairwise comparison oracle are only probably correct in
accordance with the model introduced above. Essentially, the robust procedure runs the line search
as if the oracle made no errors except that each time a comparison is needed, the oracle is repeatedly
queried until we can be confident about the true direction of the comparison. This strategy applied
to active learning is well known because of its simplicity and its ability to adapt to unknown noise
conditions [24]. However, we mention that when used in this way, this sampling procedure is known
to be sub-optimal so in practice, one may want to implement a more efficient approach like that of
[21]. Nevertheless, we have the following lemma.
Lemma 2. [24] For any $x, y \in B$ with $P\big(C_f(x,y) = \mathrm{sign}\{f(y) - f(x)\}\big) = p$, with probability at least $1 - \delta$ the coin-tossing algorithm of [24] correctly identifies the sign of $E[C_f(x,y)]$ and requests no more than

$$\frac{\log(2/\delta)}{4|1/2 - p|^2}\ \log_2\!\left(\frac{\log(2/\delta)}{4|1/2 - p|^2}\right)$$

pairwise comparisons.
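A minimal sketch of this repeated-query idea (assuming a Bernoulli oracle with unknown bias p distinct from 1/2; the Hoeffding-style stopping rule below is a simple stand-in for the exact procedure of [24]): query until a confidence interval around the empirical mean excludes zero, then return the majority sign.

```python
import numpy as np

def robust_sign(noisy_cmp, delta, max_queries=100000, rng=None):
    """Repeatedly query a +/-1 oracle with P(correct) = p != 1/2 until a
    confidence interval for the mean excludes 0; return (sign, #queries)."""
    rng = rng or np.random.default_rng(0)
    total, t = 0.0, 0
    while t < max_queries:
        t += 1
        total += noisy_cmp(rng)
        # Hoeffding radius with a crude union bound over all times t.
        radius = np.sqrt(2 * np.log(4 * t * t / delta) / t)
        if abs(total / t) > radius:
            break
    return np.sign(total), t

# Oracle that answers +1 with probability 0.6 (true sign is +1).
noisy = lambda rng: 1.0 if rng.random() < 0.6 else -1.0
print(robust_sign(noisy, delta=0.05))
```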
It would be convenient if we could simply apply the result of Lemma 2 to our line search procedure. Unfortunately, if we do this there is no guarantee that $|f(y) - f(x)|$ is bounded below, so for the case when $\kappa > 1$ it would be impossible to lower bound $|1/2 - p|$ in the lemma. To account for this, we will sample at multiple locations per iteration, as opposed to just two in the noiseless algorithm, to ensure that we can always lower bound $|1/2 - p|$. Intuitively, strong convexity ensures that $f$ cannot be arbitrarily flat, so for any three equally spaced points $x, y, z$ on the line $d_k$, if $f(x)$ is equal to $f(y)$, then the absolute difference between $f(x)$ and $f(z)$ must be bounded away from zero. Applying this idea and union bounding over the total number of times one must call the coin-tossing algorithm, one finds that with probability at least $1 - \delta$, the total number of calls to the pairwise comparison oracle over the course of the whole algorithm does not exceed

$$\widetilde O\!\left(\frac{nL}{\tau}\left(\frac{nL^2}{\tau\epsilon}\right)^{2(\kappa-1)}\log_2\!\left(\frac{f(x_0) - f(x^*)}{\epsilon}\right)\log(n/\delta)\right).$$

By finding a $T > 0$ that satisfies this bound for any $\epsilon$ we see that this is equivalent to a rate of $O\big((n\log(n/\delta)/T)^{\frac{1}{2(\kappa-1)}}\big)$ for $\kappa > 1$ and $O\big(\exp\{-c\sqrt{T/(n\log(n/\delta))}\}\big)$ for $\kappa = 1$, ignoring polylog factors.
References
[1] T. Eitrich and B. Lang. Efficient optimization of support vector machine learning parameters for unbalanced datasets. Journal of Computational and Applied Mathematics, 196(2):425-436, 2006.
[2] R. Oeuvray and M. Bierlaire. A new derivative-free algorithm for the medical image registration problem. International Journal of Modelling and Simulation, 27(2):115-124, 2007.
[3] A.R. Conn, K. Scheinberg, and L.N. Vicente. Introduction to Derivative-Free Optimization, volume 8. Society for Industrial Mathematics, 2009.
[4] Warren B. Powell and Ilya O. Ryzhov. Optimal Learning. John Wiley and Sons, 2012.
[5] Y. Nesterov. Random gradient-free minimization of convex functions. CORE Discussion Papers, 2011.
[6] N. Srinivas, A. Krause, S.M. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. Arxiv preprint arXiv:0912.3995, 2009.
[7] R. Storn and K. Price. Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4):341-359, 1997.
[8] A. Agarwal, D.P. Foster, D. Hsu, S.M. Kakade, and A. Rakhlin. Stochastic convex optimization with bandit feedback. Arxiv preprint arXiv:1107.1744, 2011.
[9] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574, 2009.
[10] V. Protasov. Algorithms for approximate calculation of the minimum of a convex function from its values. Mathematical Notes, 59:69-74, 1996. doi:10.1007/BF02312467.
[11] M. Raginsky and A. Rakhlin. Information-based complexity, feedback, and dynamics in convex programming. IEEE Transactions on Information Theory, 2011.
[12] L.L. Thurstone. A law of comparative judgment. Psychological Review, 34(4):273, 1927.
[13] Y. Yue, J. Broder, R. Kleinberg, and T. Joachims. The k-armed dueling bandits problem. Journal of Computer and System Sciences, 2012.
[14] K.G. Jamieson and R.D. Nowak. Active ranking using pairwise comparisons. Neural Information Processing Systems (NIPS), 2011.
[15] A.S. Nemirovsky and D.B. Yudin. Problem Complexity and Method Efficiency in Optimization. 1983.
[16] A. Agarwal, D.P. Foster, D. Hsu, S.M. Kakade, and A. Rakhlin. Stochastic convex optimization with bandit feedback. Arxiv preprint arXiv:1107.1744, 2011.
[17] A. Agarwal, P.L. Bartlett, P. Ravikumar, and M.J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 2010.
[18] A. Agarwal, O. Dekel, and L. Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In Conference on Learning Theory (COLT), 2010.
[19] S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. 2012.
[20] A.B. Tsybakov. Introduction to Nonparametric Estimation. Springer Verlag, 2009.
[21] R.M. Castro and R.D. Nowak. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5):2339-2353, 2008.
[22] S.P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[23] R.P. Brent. Algorithms for Minimization Without Derivatives. Dover, 2002.
[24] M. Kaariainen. Active learning in the non-realizable case. In Algorithmic Learning Theory, pages 63-77. Springer, 2006.
3,877 | 451 | Neural Network Diagnosis of Avascular Necrosis
from Magnetic Resonance Images
Armando Manduca
Dept. of Physiology and Biophysics
Mayo Clinic
Rochester, MN 55905
Paul Christy
Dept. of Diagnostic Radiology
Mayo Clinic
Rochester, MN 55905
Richard Ehman
Dept. of Diagnostic Radiology
Mayo Clinic
Rochester, MN 55905
Abstract
Avascular necrosis (AVN) of the femoral head is a common yet potentially serious disorder which can be detected in its very early stages with
magnetic resonance imaging. We have developed multi-layer perceptron
networks, trained with conjugate gradient optimization, which diagnose
AVN from single magnetic resonance images of the femoral head with
100% accuracy on training data and 97% accuracy on test data.
1 INTRODUCTION
Diagnostic radiology may be a very natural field of application for neural networks,
since a simple answer is desired from a complex image, and the learning process
that human experts undergo is to a large extent a supervised learning experience
based on looking at large numbers of images with known interpretations. Although
many workers have applied neural nets to various types of 1-dimensional medical
data (e.g. ECG and EEG waveforms) , little work has been done on applying neural
nets to diagnosis directly from medical images.
We wanted to explore the use of neural networks in diagnostic radiology by (1)
starting with a simple but real diagnostic problem, and (2) using only actual data.
We chose the diagnosis of avascular necrosis from magnetic resonance images as an
ideal initial problem, because: the area in question is small and well-defined, its
size and shape do not vary greatly between individuals, the condition (if present) is
usually visible even at low spatial and gray level resolution on a single image, and
real data is readily available.
Avascular necrosis (AVN) is the deterioration of tissue due to a disruption in the
blood supply. AVN of the femoral head (the ball at the upper end of the femur which
fits into the socket formed by the hip bone) is an increasingly common clinical problem, with potentially crippling effects. Since the sole blood supply to the femoral
head in adults traverses the femoral neck, AVN often occurs following hip fracture
(e.g., Bo Jackson). It is now apparent that AVN can also occur as a side effect of
treatment with corticosteroid drugs, which are commonly used for immunosuppression in transplant patients as well as for patients with asthma, rheumatoid arthritis
and other autoimmune diseases. Although the pathogenesis of AVN secondary to
corticosteroid use is not well understood, 6 - 10% of such patients appear to develop the disorder (Ternoven et al., 1990). AVN may be detected with magnetic
resonance imaging (MRI) even in its very early stages, as a low signal region within
the femoral head due to loss of water-containing bone marrow. MRI is expected
to play an important future role in screening patients undergoing corticosteroid
therapy for AVN.
2
METHODOLOGY
The data set selected for analysis consisted of 125 sagittal images of femoral heads
from T1-weighted MRI scans of 40 adult patients, with 51% showing evidence of
AVN, from early stages to quite severe (see Fig. 1). Often both femoral heads from
the same patient were selected (typically only one has AVN if the cause is fracturerelated while both sometimes have AVN if the cause is secondary to drug use),
and often two or three different cross-sectional slices of the same femoral head were
included (the appearance of AVN can change dramatically as one steps through
different cross-sectional slices). The images were digitized and 128x128 regions
centered on and just containing the femoral heads were manually selected. These
128x128 subimages with 256 gray levels were averaged down to 32x32 resolution
and to 16 gray levels for most of the trials (see Fig. 2).
The neural networks used to analyze the data were standard feed-forward, fully-connected multilayer perceptrons with a single hidden layer of 4 to 30 nodes and 2
output nodes. The majority of the runs were with networks of 1024 input nodes,
into which the 32x32 images were placed, with gray levels scaled so the input values
ranged within ±0.5. In other experiments with different input features the number of input nodes varied accordingly. Conjugate gradient optimization was used
for training (Kramer and Sangiovanni-Vincentelli, 1989; Barnard and Cole 1989).
Training was stopped at a maximum of 50 passes through the training set, though
usually convergence was achieved before this point. Each training run took less
than 1 minute on a SPARCstation 2.
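For illustration, here is a minimal modern re-creation of this setup (a sketch, not the authors' code; the data are synthetic stand-ins and the reduced input size is an assumption made so the finite-difference conjugate-gradient demo runs quickly): a one-hidden-layer perceptron trained by conjugate gradient via scipy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# The paper used 1024 inputs (32x32); 64 is used here to keep the demo fast.
n_in, n_hid, n_out = 64, 6, 2
X = rng.integers(0, 16, (125, n_in)) / 16.0 - 0.5   # synthetic stand-in images
y = rng.integers(0, 2, 125)                          # synthetic AVN labels

shapes = [(n_in, n_hid), (n_hid,), (n_hid, n_out), (n_out,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    parts, i = [], 0
    for s, k in zip(shapes, sizes):
        parts.append(theta[i:i + k].reshape(s))
        i += k
    return parts

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)                  # single hidden layer
    z = h @ W2 + b2
    z = z - z.max(axis=1, keepdims=True)      # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean() # cross-entropy

theta0 = rng.normal(scale=0.1, size=sum(sizes))
res = minimize(loss, theta0, method='CG', options={'maxiter': 50})
print('training loss after CG:', res.fun)
```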
Figure 1: Representative sagittal hip T1-weighted MR images. The small circular
area in the center of each picture is the femoral head (the ball joint at the upper
end of the femur). The top image shows a normal femoral head; the bottom is a
femoral head with severe avascular necrosis.
Figure 2: Sample images from our 32x32 pixel, 16 gray level data set. The five
femoral heads in the right column are free of AVN, the five in the middle column
have varying degrees of AVN, while the left column shows five images that were
particularly difficult for both the networks and untrained humans to distinguish
(only the last two have AVN).
Table 1: Diagnostic Accuracies on Test Data
(averages over 24 and 100 runs respectively)

    hidden nodes   50% training   80% training
    none           91.6%          92.6%
    4              92.6%          95.5%
    5              93.2%          96.4%
    6              93.8%          96.4%
    7              93.2%          97.0%
    8              92.4%          96.8%
    10             92.4%          96.1%
    30             91.2%          94.1%

3 RESULTS
Two sets of runs with the image data were made, with the data randomly split 50%-50% and 80%-20% into training and test data sets respectively. In the first set, 4
different random splits of the data, with either half in turn serving as training or test
data, and 3 different random weight initializations each were used for a total of 24
distinct runs for each network configuration. For the other set, since there was less
test data, 10 different splits of the data with 10 different weight initializations each
were used for a total of 100 distinct runs for each network configuration . The results
are shown in Table 1. In all cases, the sensitivity and specificity were approximately
equal. Standard deviations of the averages shown were typically 4.0% for the 24
run values and 3.0% for the 100 run values.
The overall data set is linearly separable, and networks with no hidden nodes readily
achieved 100% on training data and better than 91% on test data. Networks with
2 or 3 hidden nodes were unable to converge on the training data much of the time ,
but with 4 hidden nodes convergence was restored and accuracy on test data was
improved over the linear case. This accuracy increased up to 6 or 7 hidden nodes,
and then began a gradual decrease as still more hidden nodes were added. This
may be related to overfitting of the training data with the extra degrees of freedom, leading to poorer generalization. Adding a second hidden layer also decreased
generalization accuracy.
Many other experiments were performed, using as inputs respectively: the 2-D FFT
of the images, the power spectrum, features extracted with a ring-wedge detector
in frequency space, the image data combined with each of the above, and multiple
slight translations of the training and/or test data. None of these yielded an improvement in accuracy over the above, and no approach to date with significantly
fewer than 1024 inputs maintained the high accuracies above. We are continuing
experiments on other forms of reducing the dimensionality of the input data. A few
experiments have been run with much larger networks , maintaining the full 128x128
resolution and 256 gray levels, but this also yields no improvement in the results .
4 DISCUSSION
The networks' performance at the 50% training level was comparable to that of
humans with no training in radiology, who, supplied with the correct diagnosis
for half of the images, averaged 92.5% accuracy on the remaining half. When the
networks were trained on a larger set of data, their accuracy improved, to as high
as 97.0% when 80% of the data was used for training. We expect this performance
to continue to improve as larger data sets are collected.
It is difficult to compare the networks' performance to trained radiologists, who
can diagnose AVN with essentially 100% accuracy, but who look at multiple crosssectional images of far higher quality than our low-resolution, 16 gray-level data
set. When presented with single images from our data set, they typically make no
mistakes but set aside a few images as uncertain and strongly resist being forced
to commit to an answer on those. We are currently experimenting with networks
which can take inputs from multiple slices and which have an additional output
representing uncertainty.
We consider the 97% accuracy achieved here to be very encouraging for further
work on this problem and for the use of neural networks in more complex problems
in diagnostic radiology. This is perhaps a very natural field of application for neural
networks, since radiology resident training is essentially a four year experience with
a very large training set, and the American College of Radiology teaching file is a
classic example of a large collection of input/output training pairs (Boone et aI.,
1990). More complex diagnostic radiology problems may of course require fusing
information from multiple images or imaging modalities, clinical data, and medical
knowledge (perhaps as expert system rules). An especially intriguing possibility is
that sophisticated network based systems could someday be presented with images
which cannot currently be interpreted, supplied with the correct diagnosis as determined by other means, and learn to detect subtle distinctions in the images that
are not apparent to human radiologists.
References
Barnard, E. and Cole, R. (1989) "A neural-net training program based on conjugate
gradient optimization", Oregon Graduate Institute, Technical report CSE 89-014.
Boone, J. M., Sigillito, V. G. and Shaber, G. S. (1990), "Neural networks in radiology: An introduction and evaluation in a signal detection task", Medical Physics,
17, 234-241.
Kramer, A. and Sangiovanni-Vincentelli, A. (1989), "Efficient Parallel Learning
Algorithms for Neural Networks", in D. S. Touretzky (ed.) Advances in Neural
Information Processing Systems 1,40-48. Morgan-Kaufmann, San Mateo, CA.
Ternoven, O. et al. (1990), "Prevalence of Asymptomatic, Clinically Occult Avascular Necrosis of the Hip in a Population at Risk", Radiology, 177(P), 104.
3,878 | 4,510 | Reducing statistical time-series problems to binary
classification
Jérémie Mary
SequeL-INRIA/LIFL-CNRS,
Université de Lille, France
[email protected]
Daniil Ryabko
SequeL-INRIA/LIFL-CNRS,
Université de Lille, France
[email protected]
Abstract
We show how binary classification methods developed to work on i.i.d. data can
be used for solving statistical problems that are seemingly unrelated to classification and concern highly-dependent time series. Specifically, the problems of
time-series clustering, homogeneity testing and the three-sample problem are addressed. The algorithms that we construct for solving these problems are based
on a new metric between time-series distributions, which can be evaluated using
binary classification methods. Universal consistency of the proposed algorithms
is proven under most general assumptions. The theoretical results are illustrated
with experiments on synthetic and real-world data.
1 Introduction
Binary classification is one of the most well-understood problems of machine learning and statistics:
a wealth of efficient classification algorithms has been developed and applied to a wide range of
applications. Perhaps one of the reasons for this is that binary classification is conceptually one of
the simplest statistical learning problems. It is thus natural to try and use it as a building block for
solving other, more complex, newer or just different problems; in other words, one can try to obtain
efficient algorithms for different learning problems by reducing them to binary classification. This
approach has been applied to many different problems, starting with multi-class classification, and
including regression and ranking [3, 16], to give just a few examples. However, all of these problems
are formulated in terms of independent and identically distributed (i.i.d.) samples. This is also the
assumption underlying the theoretical analysis of most of the classification algorithms.
In this work we consider learning problems that concern time-series data for which independence
assumptions do not hold. The series can exhibit arbitrary long-range dependence, and different time-series samples may be interdependent as well. Moreover, the learning problems that we consider (the three-sample problem, time-series clustering, and homogeneity testing) at first glance seem completely unrelated to classification.
We show how the considered problems can be reduced to binary classification methods. The results
include asymptotically consistent algorithms, as well as finite-sample analysis. To establish the consistency of the suggested methods, for clustering and the three-sample problem the only assumption
that we make on the data is that the distributions generating the samples are stationary ergodic; this
is one of the weakest assumptions used in statistics. For homogeneity testing we have to make some
mixing assumptions in order to obtain consistency results (this is indeed unavoidable [22]). Mixing
conditions are also used to obtain finite-sample performance guarantees for the first two problems.
The proposed approach is based on a new distance between time-series distributions (that is, between probability distributions on the space of infinite sequences), which we call telescope distance.
This distance can be evaluated using binary classification methods, and its finite-sample estimates
are shown to be asymptotically consistent. Three main building blocks are used to construct the tele1
scope distance. The first one is a distance on finite-dimensional marginal distributions. The distance
we use for this is the following: dH (P, Q) := suph?H |EP h ? EQ h| where P, Q are distributions
and H is a set of functions. This distance can be estimated using binary classification methods,
and thus can be used to reduce various statistical problems to the classification problem. This distance was previously applied to such statistical problems as homogeneity testing and change-point
estimation [14]. However, these applications so far have only concerned i.i.d. data, whereas we
want to work with highly-dependent time series. Thus, the second building block are the recent
results of [1, 2], that show that empirical estimates of dH are consistent (under certain conditions
on H) for arbitrary stationary ergodic distributions. This, however, is not enough: evaluating dH
for (stationary ergodic) time-series distributions means measuring the distance between their finitedimensional marginals, and not the distributions themselves. Finally, the third step to construct the
distance is what we call telescoping. It consists in summing the distances for all the (infinitely many)
finite-dimensional marginals with decreasing weights.
We show that the resulting distance (telescope distance) indeed can be consistently estimated based
on sampling, for arbitrary stationary ergodic distributions. Further, we show how this fact can be
used to construct consistent algorithms for the considered problems on time series. Thus we can
harness binary classification methods to solve statistical learning problems concerning time series.
To illustrate the theoretical results in an experimental setting, we chose the problem of time-series
clustering, since it is a difficult unsupervised problem which seems most different from the problem of binary classification. Experiments on both synthetic and real-world data are provided. The
real-world setting concerns brain-computer interface (BCI) data, which is a notoriously challenging
application, and on which the presented algorithm demonstrates competitive performance.
A related approach to address the problems considered here, as well as some related problems about
stationary ergodic time series, is based on (consistent) empirical estimates of the distributional distance, see [23, 21, 13] and [8] about the distributional distance. The empirical distance is based on
counting frequencies of bins of decreasing sizes and "telescoping." A similar telescoping trick is
used in different problems, e.g. sequence prediction [19]. Another related approach to time-series
analysis involves a different reduction, namely, that to data compression [20].
Organisation. Section 2 is preliminary. In Section 3 we introduce and discuss the telescope distance. Section 4 explains how this distance can be calculated using binary classification methods.
Sections 5 and 6 are devoted to the three-sample problem and clustering, respectively. In Section 7,
under some mixing conditions, we address the problems of homogeneity testing, clustering with
unknown k, and finite-sample performance guarantees. Section 8 presents experimental evaluation.
Some proofs are deferred to the supplementary material.
2 Notation and definitions
Let $(\mathcal X, \mathcal F_{\mathcal X})$ be a measurable space (the domain). Time-series (or process) distributions are probability measures on the space $(\mathcal X^{\mathbb N}, \mathcal F_{\mathbb N})$ of one-way infinite sequences (where $\mathcal F_{\mathbb N}$ is the Borel sigma-algebra of $\mathcal X^{\mathbb N}$). We use the abbreviation $X_{1..k}$ for $X_1, \dots, X_k$. All sets and functions introduced below (in particular, the sets $H_k$ and their elements) are assumed measurable.

A distribution $\rho$ is stationary if $\rho(X_{1..k} \in A) = \rho(X_{n+1..n+k} \in A)$ for all $A \in \mathcal F_{\mathcal X^k}$, $k, n \in \mathbb N$ (with $\mathcal F_{\mathcal X^k}$ being the sigma-algebra of $\mathcal X^k$). A stationary distribution is called (stationary) ergodic if $\lim_{n\to\infty}\frac1n \sum_{i=1}^{n-k+1} \mathbb I_{X_{i..i+k-1} \in A} = \rho(A)$ $\rho$-a.s. for every $A \in \mathcal F_{\mathcal X^k}$, $k \in \mathbb N$. (This definition, which is more suited for the purposes of this work, is equivalent to the usual one expressed in terms of invariant sets, see e.g. [8].)
3 A distance between time-series distributions

We start with a distance between distributions on $\mathcal X$, and then we will extend it to distributions on $\mathcal X^\infty$. For two probability distributions $P$ and $Q$ on $(\mathcal X, \mathcal F)$ and a set $H$ of measurable functions on $\mathcal X$, one can define the distance

$$d_H(P,Q) := \sup_{h \in H} |E_P h - E_Q h|.$$
Special cases of this distance are Kolmogorov-Smirnov [15], Kantorovich-Rubinstein [11] and
Fortet-Mourier [7] metrics; the general case has been studied since at least [26].
We will be interested in the cases where dH (P, Q) = 0 implies P = Q. Note that in this case dH
is a metric (the rest of the properties are easy to see). For reasons that will become apparent shortly
(see Remark below), we will be mainly interested in the sets $H$ that consist of indicator functions. In this case we can identify each $f \in H$ with the set $\{x : f(x) = 1\} \subset \mathcal X$ and (by a slight abuse of notation) write $d_H(P,Q) := \sup_{h \in H} |P(h) - Q(h)|$. It is easy to check that in this case $d_H$ is
a metric if and only if H generates F. The latter property is often easy to verify directly. First
of all, it trivially holds for the case where H is the set of halfspaces in a Euclidean X . It is also
easy to check that it holds if H is the set of halfspaces in the feature space of most commonly used
kernels (provided the feature space is of the same or higher dimension than the input space), such as
polynomial and Gaussian kernels.
Based on $d_H$ we can construct a distance between time-series probability distributions. For two time-series distributions $\rho_1, \rho_2$ we take the $d_H$ between the $k$-dimensional marginal distributions of $\rho_1$ and $\rho_2$ for each $k \in \mathbb N$, and sum them all up with decreasing weights.
Definition 1 (telescope distance $D$). For two time-series distributions $\rho_1$ and $\rho_2$ on the space $(\mathcal X^\infty, \mathcal F_\infty)$ and a sequence of sets of functions $H = (H_1, H_2, \dots)$ define the telescope distance

$$D_H(\rho_1, \rho_2) := \sum_{k=1}^{\infty} w_k \sup_{h \in H_k} \big|E_{\rho_1} h(X_1, \dots, X_k) - E_{\rho_2} h(Y_1, \dots, Y_k)\big|, \qquad (1)$$

where $w_k$, $k \in \mathbb N$ is a sequence of positive summable real weights (e.g. $w_k = 1/k^2$).
Lemma 1. $D_H$ is a metric if and only if $d_{H_k}$ is a metric for every $k \in \mathbb N$.
Proof. The statement follows from the fact that two process distributions are the same if and only if
all their finite-dimensional marginals coincide.
Definition 2 (empirical telescope distance $\hat D$). For a pair of samples $X_{1..n}$ and $Y_{1..m}$ define the empirical telescope distance as

$$\hat D_H(X_{1..n}, Y_{1..m}) := \sum_{k=1}^{\min\{m,n\}} w_k \sup_{h \in H_k} \left|\frac{1}{n-k+1}\sum_{i=1}^{n-k+1} h(X_{i..i+k-1}) - \frac{1}{m-k+1}\sum_{i=1}^{m-k+1} h(Y_{i..i+k-1})\right|. \qquad (2)$$
All the methods presented in this work are based on the empirical telescope distance. The key fact
is that it is an asymptotically consistent estimate of the telescope distance, that is, the latter can be
consistently estimated based on sampling.
Theorem 1. Let $H = (H_1, H_2, \dots)$ be a sequence of separable sets $H_k$ of indicator functions on $\mathcal X^k$ of finite VC dimension such that $H_k$ generates $\mathcal F_{\mathcal X^k}$. Then, for every pair of stationary ergodic time-series distributions $\rho_X$ and $\rho_Y$ generating samples $X_{1..n}$ and $Y_{1..m}$ we have

$$\lim_{n,m\to\infty} \hat D_H(X_{1..n}, Y_{1..m}) = D_H(\rho_X, \rho_Y). \qquad (3)$$
The proof is deferred to the supplementary material. Note that $\hat D_H$ is a biased estimate of $D_H$ and, unlike in the i.i.d. case, the bias may depend on the distributions; however, the bias is $o(n)$.

Remark. The condition that the sets $H_k$ are sets of indicator functions of finite VC dimension comes from [2], where it is shown that for any stationary ergodic distribution $\rho$, under these conditions, $\frac{1}{n-k+1}\sum_{i=1}^{n-k+1} h(X_{i..i+k-1})$ is an asymptotically consistent estimate of $E_\rho h(X_1, \dots, X_k)$, uniformly over $h \in H_k$. This fact implies that $d_H$ can be consistently estimated, from which the theorem is derived.
4 Calculating $\hat D_H$ using binary classification methods

The methods for solving various statistical problems that we suggest are all based on $\hat D_H$. The main appeal of this approach is that $\hat D_H$ can be calculated using binary classification methods. Here we explain how to do it.
The definition (2) of $\hat D_H$ involves calculating $l$ summands (where $l := \min\{n,m\}$), that is,

$$\sup_{h \in H_k} \left|\frac{1}{n-k+1}\sum_{i=1}^{n-k+1} h(X_{i..i+k-1}) - \frac{1}{m-k+1}\sum_{i=1}^{m-k+1} h(Y_{i..i+k-1})\right| \qquad (4)$$

for each $k = 1..l$. Assuming that $h \in H_k$ are indicator functions, calculating each of the summands amounts to solving the following $k$-dimensional binary classification problem. Consider $X_{i..i+k-1}$, $i = 1..n-k+1$ as class-1 examples and $Y_{i..i+k-1}$, $i = 1..m-k+1$ as class-0 examples. The supremum (4) is attained on the $h \in H_k$ that minimizes the empirical risk, with examples weighted with respect to the sample size. Indeed, then we can define the weighted empirical risk of any $h \in H_k$ as

$$\frac{1}{n-k+1}\sum_{i=1}^{n-k+1}\big(1 - h(X_{i..i+k-1})\big) + \frac{1}{m-k+1}\sum_{i=1}^{m-k+1} h(Y_{i..i+k-1}),$$

which is obviously minimized by any $h \in H_k$ that attains (4).
Thus, as long as we have a way to find h ? Hk that minimizes empirical risk, we have a consistent
estimate of DH (?X , ?Y ), under the mild conditions on H required by Theorem 1. Since the dimension of the resulting classification problems grows with the length of the sequences, one should
prefer methods that work in high dimensions, such as soft-margin SVMs [6].
A particularly remarkable feature is that the choice of Hk is much easier for the problems that we
consider than in the binary classification problem. Specifically, if (for some fixed k) the classifier
that achieves the minimal (Bayes) error for the classification problem is not in Hk , then obviously
the error of an empirical risk minimizer will not tend to zero, no matter how much data we have. In contrast, all we need to achieve asymptotically 0 error in estimating $\hat D$ (and therefore, in the learning problems considered below) is that the sets $H_k$ generate $\mathcal F_{\mathcal X^k}$ and have a finite VC dimension (for each $k$). This is the case already for the set of hyperplanes in $\mathbb R^k$! Thus, while the
choice of Hk (or, say, of the kernel to use in SVM) is still important from the practical point of view,
it is almost irrelevant for the theoretical consistency results. Thus, we have the following.
Claim 1. The approximation error $|D_H(P,Q) - \hat D_H(X,Y)|$, and thus the error of the algorithms below, can be much smaller than the error of the classification algorithms used to calculate $\hat D_H(X,Y)$.
Finally, we remark that while in (2) the number of summands is $l$, it can be replaced with any $\lambda_l$ such that $\lambda_l \to \infty$, without affecting any asymptotic consistency results. A practically viable choice is $\lambda_l = \log l$; in fact, there is no reason to choose a faster-growing $\lambda_l$, since the estimates for higher-order summands will not have enough data to converge. This is also the value we use in the experiments.
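The following sketch (an illustration, not the authors' implementation; the sample data and the use of scikit-learn's LinearSVC as the empirical-risk minimizer are assumptions) estimates $\hat D_H$ with $H_k$ the halfspaces in $\mathbb R^k$, using a soft-margin linear SVM with class-balanced weights for each $k$ up to $\log l$.

```python
import numpy as np
from sklearn.svm import LinearSVC

def windows(x, k):
    """All length-k contiguous windows of a 1-d series x."""
    return np.lib.stride_tricks.sliding_window_view(x, k)

def telescope_distance(x, y, w=lambda k: 1.0 / k**2):
    """Empirical telescope distance between two real-valued series,
    summing k = 1 .. log(min(n, m)) as suggested in the text."""
    d = 0.0
    for k in range(1, max(2, int(np.log(min(len(x), len(y))))) + 1):
        A, B = windows(x, k), windows(y, k)
        Z = np.vstack([A, B])
        labels = np.r_[np.ones(len(A)), np.zeros(len(B))]
        # class_weight='balanced' mimics weighting by sample size.
        h = LinearSVC(class_weight='balanced', max_iter=5000).fit(Z, labels)
        # |mean_h(A) - mean_h(B)| approximates the supremum over halfspaces.
        d += w(k) * abs(h.predict(A).mean() - h.predict(B).mean())
    return d

rng = np.random.default_rng(0)
x = rng.normal(size=1000).cumsum() * 0.01 + rng.normal(size=1000)  # dependent
y = rng.normal(size=1000)                                          # i.i.d.
print(telescope_distance(x, y), telescope_distance(x, x.copy()))
```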
5 The three-sample problem
We start with a conceptually simple problem known in statistics as the three-sample problem (sometimes also called time-series classification). We are given three samples $X = (X_1, \dots, X_n)$,
Y = (Y1 , . . . , Ym ) and Z = (Z1 , . . . , Zl ). It is known that X and Y were generated by different time-series distributions, whereas Z was generated by the same distribution as either X or Y . It
is required to find out which one is the case. Both distributions are assumed to be stationary ergodic,
but no further assumptions are made about them (no independence, mixing or memory assumptions). The three-sample problem for dependent time series has been addressed in [9] for Markov
processes and in [23] for stationary ergodic time series. The latter work uses an approach based on
the distributional distance.
Indeed, to solve this problem it suffices to have consistent estimates of some distance between time
series distributions. Thus, we can use the telescope distance. The following statement is a simple
corollary of Theorem 1.
Theorem 2. Let the samples $X = (X_1, \dots, X_n)$, $Y = (Y_1, \dots, Y_m)$ and $Z = (Z_1, \dots, Z_l)$ be generated by stationary ergodic distributions $\rho_X$, $\rho_Y$ and $\rho_Z$, with $\rho_X \ne \rho_Y$ and either (i) $\rho_Z = \rho_X$ or (ii) $\rho_Z = \rho_Y$. Assume that the sets $H_k$, $k \in \mathbb N$, are separable sets of indicator functions on $\mathcal X^k$ of finite VC dimension such that $H_k$ generates $\mathcal F_{\mathcal X^k}$. A test that declares (i) if $\hat D_H(Z,X) \le \hat D_H(Z,Y)$ and (ii) otherwise makes only finitely many errors with probability 1 as $n, m, l \to \infty$.
It is straightforward to extend this theorem to more than two classes; in other words, instead of X
and Y one can have an arbitrary number of samples from different stationary ergodic distributions.
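With the estimator sketched above, the test of Theorem 2 is a single comparison (again an illustration; `telescope_distance` is the hypothetical helper defined in Section 4's sketch):

```python
def three_sample_test(x, y, z):
    """Declare which of x, y was generated by the same process as z."""
    return 'x' if telescope_distance(z, x) <= telescope_distance(z, y) else 'y'
```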
6 Clustering time series

We are given $N$ samples $X^1 = (X^1_1, \dots, X^1_{n_1}), \dots, X^N = (X^N_1, \dots, X^N_{n_N})$ generated by $k$ different stationary ergodic time-series distributions $\rho_1, \dots, \rho_k$. The number $k$ is known, but the distributions are not. It is required to group the $N$ samples into $k$ groups (clusters), that is, to output a partitioning of $\{X^1, \dots, X^N\}$ into $k$ sets. While there may be many different approaches to define
what is a good clustering (and, in general, deciding what is a good clustering is a difficult problem),
for the problem of classifying time-series samples there is a natural choice, proposed in [21]: those
samples should be put together that were generated by the same distribution. Thus, define target
clustering as the partitioning in which those and only those samples that were generated by the same
distribution are placed in the same cluster. A clustering algorithm is called asymptotically consistent if with probability 1 there is an $n_0$ such that the algorithm produces the target clustering whenever $\min_{i=1..N} n_i \ge n_0$.

Again, to solve this problem it is enough to have a metric between time-series distributions that can be consistently estimated. Our approach here is based on the telescope distance, and thus we use $\hat D$.
The clustering problem is relatively simple if the target clustering has what is called the strict separation property [4]: every two points in the same target cluster are closer to each other than to any
point from a different target cluster. The following statement is an easy corollary of Theorem 1.
Theorem 3. Assume that the sets $H_k$, $k \in \mathbb N$, are separable sets of indicator functions on $\mathcal X^k$ of finite VC dimension, such that $H_k$ generates $\mathcal F_{\mathcal X^k}$. If the distributions $\rho_1, \dots, \rho_k$ generating the samples $X^1 = (X^1_1, \dots, X^1_{n_1}), \dots, X^N = (X^N_1, \dots, X^N_{n_N})$ are stationary ergodic, then with probability 1 from some $n := \min_{i=1..N} n_i$ on the target clustering has the strict separation property with respect to $\hat D_H$.
With the strict separation property at hand, it is easy to find asymptotically consistent algorithms.
We will give some simple examples, but the theorem below can be extended to many other distance-based clustering algorithms.
The average linkage algorithm works as follows. The distance between clusters is defined as the
average distance between points in these clusters. First, put each point into a separate cluster. Then,
merge the two closest clusters; repeat the last step until the total number of clusters is k. The farthest
point clustering works as follows. Assign $c_1 := X^1$ to the first cluster. For $i = 2..k$, find the point $X^j$, $j \in \{1..N\}$, that maximizes the distance $\min_{t=1..i-1} \hat D_H(X^j, c_t)$ (to the points already assigned to clusters) and assign $c_i := X^j$ to the cluster $i$. Then assign each of the remaining points to the nearest cluster (a code sketch is given after Theorem 4 below). The following statement is a corollary of Theorem 3.
Theorem 4. Under the conditions of Theorem 3, average linkage and farthest point clusterings are
asymptotically consistent.
Note that we do not require the samples to be independent; the joint distributions of the samples may
be completely arbitrary, as long as the marginal distribution of each sample is stationary ergodic.
These results can be extended to the online setting in the spirit of [13].
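A sketch of farthest-point clustering on top of the distance estimator follows (again, `telescope_distance` is the hypothetical helper from the Section 4 sketch, and any consistent estimate of $D$ would do; the toy AR processes assume the estimator separates them).

```python
import numpy as np

def farthest_point_clustering(samples, k, dist):
    """Cluster time-series samples into k groups with farthest-point init.
    For simplicity the full pairwise distance matrix is precomputed."""
    D = np.array([[dist(a, b) for b in samples] for a in samples])
    centers = [0]                      # start from the first sample
    for _ in range(1, k):
        # next center: sample farthest from all current centers
        centers.append(int(D[:, centers].min(axis=1).argmax()))
    # assign every sample to its nearest center
    return [int(D[i, centers].argmin()) for i in range(len(samples))]

# Example: two AR(1)-like processes with different coefficients.
rng = np.random.default_rng(1)
def ar1(phi, n=800):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

samples = [ar1(0.1) for _ in range(3)] + [ar1(0.9) for _ in range(3)]
print(farthest_point_clustering(samples, 2, telescope_distance))
```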
7 Speed of convergence
The results established so far are asymptotic out of necessity: they are established under the assumption that the distributions involved are stationary ergodic, which is too general to allow for
any meaningful finite-time performance guarantees. Moreover, some statistical problems, such as
homogeneity testing or clustering when the number of clusters is unknown, are provably impossible
to solve under this assumption [22].
While it is interesting to be able to establish consistency results under such general assumptions, it
is also interesting to see what results can be obtained under stronger assumptions. Moreover, since
it is usually not known in advance whether the data at hand satisfies given assumptions or not, it
appears important to have methods that have both asymptotic consistency in the general setting and
finite-time performance guarantees under stronger assumptions.
In this section we will look at the speed of convergence of D̂ under certain mixing conditions, and
use it to construct solutions for the problems of homogeneity and clustering with an unknown number
of clusters, as well as to establish finite-time performance guarantees for the methods presented
in the previous sections.
A stationary distribution on the space of one-way infinite sequences (X^N, F_N) can be uniquely
extended to a stationary distribution on the space of two-way infinite sequences (X^Z, F_Z) of the
form ..., X_{−1}, X_0, X_1, ....
Definition 3 (β-mixing coefficients). For a process distribution ρ define the mixing coefficients

    β(ρ, k) := sup_{A ∈ σ(X_{−∞..0}), B ∈ σ(X_{k..∞})} |ρ(A ∩ B) − ρ(A)ρ(B)|,

where σ(..) denotes the sigma-algebra of the random variables in brackets.
When β(ρ, k) → 0 the process ρ is called absolutely regular; this condition is much stronger than
ergodicity, but is much weaker than the i.i.d. assumption.
7.1 Speed of convergence of D̂
Assume that a sample X_{1..n} is generated by a distribution ρ that is uniformly β-mixing with coefficients β(ρ, k). Assume further that H_k is a set of indicator functions with a finite VC dimension d_k,
for each k ∈ N.
The general tool that we use to obtain performance guarantees in this section is the following bound
that can be obtained from the results of [12].
    q_n(ρ, H_k, ε) := ρ( sup_{h ∈ H_k} | (1/(n−k+1)) Σ_{i=1}^{n−k+1} h(X_{i..i+k−1}) − E_ρ h(X_{1..k}) | > ε )
                    ≤ nβ(ρ, t_n − k) + 8 t_n^{d_k+1} e^{−l_n ε²/8},   (5)

where t_n are any integers in 1..n and l_n := n/t_n. The parameters t_n should be set according to the
values of β in order to optimize the bound.
One can use similar bounds for classes of finite Pollard dimension [18] or more general bounds
expressed in terms of covering numbers, such as those given in [12]. Here we consider classes
of finite VC dimension only for the ease of the exposition and for the sake of continuity with the
previous section (where it was necessary).
Furthermore, for the rest of this section we assume geometric β-mixing distributions, that is,
β(ρ, t) ≤ γ^t for some γ < 1. Letting l_n = t_n = √n, the bound (5) becomes

    q_n(ρ, H_k, ε) ≤ nγ^{√n − k} + 8 n^{(d_k+1)/2} e^{−√n ε²/8}.   (6)
Lemma 2. Let two samples X_{1..n} and Y_{1..m} be generated by stationary distributions ρ_X and ρ_Y
whose β-mixing coefficients satisfy β(ρ·, t) ≤ γ^t for some γ < 1. Let H_k, k ∈ N, be some sets of
indicator functions on X^k whose VC dimension d_k is finite and non-decreasing with k. Then

    P( |D̂_H(X_{1..n}, Y_{1..m}) − D_H(ρ_X, ρ_Y)| > ε ) ≤ 2Δ(ε/4, n_0)   (7)

where n_0 := min{n, m}, the probability is with respect to ρ_X × ρ_Y, and

    Δ(ε, n) := ⌈−log ε⌉ ( nγ^{√n + log ε} + 8 n^{(d_{⌈−log ε⌉}+1)/2} e^{−√n ε²/8} ).   (8)

7.2 Homogeneity testing
Given two samples X_{1..n} and Y_{1..m} generated by distributions ρ_X and ρ_Y respectively, the problem
of homogeneity testing (or the two-sample problem) consists in deciding whether ρ_X = ρ_Y. A test
is called (asymptotically) consistent if its probability of error goes to zero as n_0 := min{m, n} goes
to infinity. In general, for stationary ergodic time series distributions, there is no asymptotically
consistent test for homogeneity [22], so stronger assumptions are in order.
Homogeneity testing is one of the classical problems of mathematical statistics, and one of the most
studied ones. A vast literature exists on homogeneity testing for i.i.d. data, and for dependent processes
as well. We do not attempt to survey this literature here. Our contribution to this line of research is
to show that this problem can be reduced (via the telescope distance) to binary classification, in the
case of strongly dependent processes satisfying some mixing conditions.
It is easy to see that under the mixing conditions of Lemma 1 a consistent test for homogeneity exists,
and finite-sample performance guarantees can be obtained. It is enough to find a sequence ε_n → 0
such that Δ(ε_n, n) → 0 (see (8)). Then the test can be constructed as follows: say that the two sequences X_{1..n} and Y_{1..m} were generated by the same distribution if D̂_H(X_{1..n}, Y_{1..m}) < ε_{min{n,m}};
otherwise say that they were generated by different distributions. The following statement is an immediate consequence of Lemma 2.
Theorem 5. Under the conditions of Lemma 2 the probability of Type I error (the distributions are
the same but the test says they are different) of the described test is upper-bounded by 4Δ(ε/8, n_0).
The probability of Type II error (the distributions are different but the test says they are the same) is
upper-bounded by 4Δ(δ − ε/8, n_0) where δ := (1/2) D_H(ρ_X, ρ_Y).
The optimal choice of ε_n may depend on the speed at which d_k (the VC dimension of H_k) increases;
however, for most natural cases (recall that H_k are also parameters of the algorithm) this growth is
polynomial, so the main term to control is e^{−√n ε²/8}.
For example, if H_k is the set of halfspaces in X^k = R^k then d_k = k + 1 and one can choose
ε_n := n^{−1/8}. The resulting probability of Type I error decreases as exp(−n^{1/4}).
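This test is straightforward to implement. Below is a minimal Python sketch under the halfspace threshold just described (ε_n := n^{−1/8}); the estimator telescope_distance is assumed to be supplied, e.g., built from binary classifiers (SVMs) as in Section 4, and all names are ours:

```python
# Sketch of the homogeneity test: declare the two samples identically
# distributed iff the estimated telescope distance falls below the
# vanishing threshold eps_n, here n^(-1/8) as for halfspace classes.

def same_distribution(x, y, telescope_distance):
    n0 = min(len(x), len(y))
    eps = n0 ** (-1.0 / 8.0)
    return telescope_distance(x, y) < eps
```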
7.3 Clustering with a known or unknown number of clusters
If the distributions generating the samples satisfy certain mixing conditions, then we can augment
Theorems 3 and 4 with finite-sample performance guarantees.
Theorem 6. Let the distributions ρ_1, ..., ρ_k generating the samples X^1 =
(X^1_1, ..., X^1_{n_1}), ..., X^N = (X^N_1, ..., X^N_{n_N}) satisfy the conditions of Lemma 2. Define
δ := min_{i,j=1..k, i≠j} D_H(ρ_i, ρ_j) and n := min_{i=1..N} n_i. Then with probability at least

    1 − N(N − 1)Δ(δ/4, n)/2

the target clustering of the samples has the strict separation property. In this case single linkage
and farthest point algorithms output the target clustering.
Proof. Note that a sufficient condition for the strict separation property to hold is that for every one
out of N(N − 1)/2 pairs of samples the estimate D̂_H(X^i, X^j), i, j = 1..N, is within δ/4 of the D_H
distance between the corresponding distributions. It remains to apply Lemma 2 to obtain the first
statement, and the second statement is obvious (cf. Theorem 4).
As with homogeneity testing, while in the general case of stationary ergodic distributions it is impossible to have a consistent clustering algorithm when the number of clusters k is unknown, the
situation changes if the distributions satisfy certain mixing conditions. In this case a consistent clustering algorithm can be obtained as follows. Assign to the same cluster all samples that are at most
ε_n-far from each other, where the threshold ε_n is selected the same way as for homogeneity testing:
ε_n → 0 and Δ(ε_n, n) → 0. The optimal choice of this parameter depends on the choice of H_k
through the speed of growth of the VC dimension d_k of these sets.
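The following Python sketch implements this procedure, reusing the telescope_distance estimator and the halfspace threshold from the previous subsection; grouping is done by taking connected components of the "ε_n-close" graph, which is one way to formalize "assign to the same cluster all samples that are at most ε_n-far from each other":

```python
# Sketch: cluster with unknown k by linking samples whose estimated
# distance is below eps_n and taking connected components (union-find).

def cluster_unknown_k(samples, telescope_distance):
    n = min(len(s) for s in samples)
    eps = n ** (-1.0 / 8.0)              # same threshold choice as above
    parent = list(range(len(samples)))   # union-find over sample indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            if telescope_distance(samples[i], samples[j]) < eps:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(samples)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```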
Theorem 7. Given N samples generated by k different stationary distributions ρ_i, i = 1..k (unknown k), all satisfying the conditions of Lemma 2, the probability of error (misclustering at least
one sample) of the described algorithm is upper-bounded by

    2N(N − 1) max{Δ(ε/8, n), Δ(δ − ε/8, n)}

where δ := min_{i,j=1..k, i≠j} D_H(ρ_i, ρ_j) and n := min_{i=1..N} n_i, with n_i, i = 1..N, being the lengths of
the samples.
8 Experiments
For experimental evaluation we chose the problem of time-series clustering. Average-linkage clustering is used, with the telescope distance between samples calculated using an SVM, as described
in Section 4. In all experiments, the SVM is used with a radial basis kernel, with the default parameters of
libsvm [5].
8.1 Synthetic data
For the artificial setting we have chosen highly-dependent time-series distributions which have the
same single-dimensional marginals and which cannot be well approximated by finite- or countable-state models. The distributions ρ(α), α ∈ (0, 1), are constructed as follows. Select r_0 ∈ [0, 1]
uniformly at random; then, for each i = 1..n obtain r_i by shifting r_{i−1} by α to the right, and
removing the integer part. The time series (X_1, X_2, ...) is then obtained from r_i by drawing a point
from a distribution law N_1 if r_i < 0.5 and from N_2 otherwise. N_1 is a 3-dimensional Gaussian with
mean of 0 and covariance matrix Id · 1/4. N_2 is the same but with mean 1. If α is irrational^1 then the
distribution ρ(α) is stationary ergodic, but does not belong to any simpler natural distribution family
[25]. The single-dimensional marginal is the same for all values of α. The latter two properties
make all parametric and most non-parametric methods inapplicable to this problem.
In our experiments, we use two process distributions ρ(α_i), i ∈ {1, 2}, with α_1 = 0.31..., α_2 =
0.35.... The dependence of the error rate on the length of the time series is shown in Figure 1. One
clustering experiment on sequences of length 1000 takes about 5 min. on a standard laptop.
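A sketch of this generator in Python/NumPy (the function name and the use of NumPy are ours; the construction follows the description above):

```python
import numpy as np

def sample_rho(alpha, n, rng=None):
    # Draw a length-n trajectory from rho(alpha): r is rotated by alpha
    # on the unit interval; each step emits a 3-d Gaussian whose mean
    # (all zeros or all ones) depends on which half of [0, 1] r is in.
    rng = rng if rng is not None else np.random.default_rng()
    r = rng.uniform(0.0, 1.0)
    xs = np.empty((n, 3))
    cov = np.eye(3) / 4.0            # covariance Id * 1/4
    for i in range(n):
        r = (r + alpha) % 1.0        # shift right, drop the integer part
        mean = np.zeros(3) if r < 0.5 else np.ones(3)
        xs[i] = rng.multivariate_normal(mean, cov)
    return xs
```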
8.2 Real data
To demonstrate the applicability of the proposed methods to realistic scenarios, we chose the brain-computer interface data from BCI competition III [17]. The dataset consists of (pre-processed)
foot, a random letter). Originally, each time series consisted of several consecutive sequences of
different classes, and the problem was supervised: three time series for training and one for testing.
We split each of the original time series into classes, and then used our clustering algorithm in a
completely unsupervised setting. The original problem is 96-dimensional, but we used only the first
3 dimensions (using all 96 gives worse performance). The typical sequence length is 300. The
performance is reported in Table 1, labeled TSSVM . All the computation for this experiment takes
approximately 6 minutes on a standard laptop.
The following methods were used for comparison. First, we used dynamic time warping (DTW)
[24], which is a popular baseline approach for time-series clustering. The other two methods in
Table 1 are from [10]. The comparison is not fully relevant, since the results in [10] are for different
settings; the method KCpA was used in change-point estimation method (a different but also unsupervised setting), and SVM was used in a supervised setting. The latter is of particular interest
since the classification method we used in the telescope distance is also SVM, but our setting is
unsupervised (clustering).
Table 1: Clustering accuracy in the BCI dataset. 3 subjects (columns), 4 methods (rows). Our
method is TSSVM.

              s1     s2     s3
    TSSVM     84%    81%    61%
    DTW       46%    41%    36%
    KCpA      79%    74%    61%
    SVM       76%    69%    60%

Figure 1: Error of two-class clustering using TSSVM; 10 time series in each target cluster, averaged
over 20 runs. [Plot: error rate (0.0-0.4) vs. time of observation (0-1200).]
Acknowledgments. This research was funded by the Ministry of Higher Education and Research, Nord-Pas-de-Calais Regional Council and FEDER (Contrat de Projets Etat Region CPER 2007-2013), ANR projects
EXPLO-RA (ANR-08-COSI-004), Lampada (ANR-09-EMER-007) and CoAdapt, and by the European Community's FP7 Program under grant agreements n° 216886 (PASCAL2) and n° 270327 (CompLACS).
1. In the experiments α is simulated by a long double with a long mantissa.
References
[1] T. M. Adams and A. B. Nobel. Uniform convergence of Vapnik-Chervonenkis classes under
ergodic sampling. The Annals of Probability, 38:1345–1367, 2010.
[2] T. M. Adams and A. B. Nobel. Uniform approximation of Vapnik-Chervonenkis classes.
Bernoulli, 18(4):1310–1319, 2012.
[3] M.-F. Balcan, N. Bansal, A. Beygelzimer, D. Coppersmith, J. Langford, and G. Sorkin. Robust
reductions from ranking to classification. In COLT'07, v. 4539 of LNCS, pages 604–619, 2007.
[4] M.-F. Balcan, A. Blum, and S. Vempala. A discriminative framework for clustering via similarity functions. In STOC, pages 671–680. ACM, 2008.
[5] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM
Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available
at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[6] C. Cortes and V. Vapnik. Support-vector networks. Mach. Learn., 20(3):273–297, 1995.
[7] R. Fortet and E. Mourier. Convergence de la répartition empirique vers la répartition
théorique. Ann. Sci. Ec. Norm. Super., III. Ser, 70(3):267–285, 1953.
[8] R. Gray. Probability, Random Processes, and Ergodic Properties. Springer Verlag, 1988.
[9] M. Gutman. Asymptotically optimal classification for multiple tests with empirically observed
statistics. IEEE Transactions on Information Theory, 35(2):402–408, 1989.
[10] Z. Harchaoui, F. Bach, and E. Moulines. Kernel change-point analysis. In NIPS, pages 609–616, 2008.
[11] L. V. Kantorovich and G. S. Rubinstein. On a function space in certain extremal problems.
Dokl. Akad. Nauk USSR, 115(6):1058–1061, 1957.
[12] R. L. Karandikar and M. Vidyasagar. Rates of uniform convergence of empirical means with
mixing processes. Statistics and Probability Letters, 58:297–307, 2002.
[13] A. Khaleghi, D. Ryabko, J. Mary, and P. Preux. Online clustering of processes. In AISTATS,
JMLR W&CP 22, pages 601–609, 2012.
[14] D. Kifer, S. Ben-David, and J. Gehrke. Detecting change in data streams. In VLDB (v. 30), pages 180–191,
2004.
[15] A. N. Kolmogorov. Sulla determinazione empirica di una legge di distribuzione. G. Inst. Ital.
Attuari, pages 83–91, 1933.
[16] John Langford, Roberto Oliveira, and Bianca Zadrozny. Predicting conditional quantiles via
reduction to classification. In UAI, 2006.
[17] José del R. Millán. On the need for on-line learning in brain-computer interfaces. In Proc. of
the Int. Joint Conf. on Neural Networks, 2004.
[18] D. Pollard. Convergence of Stochastic Processes. Springer, 1984.
[19] B. Ryabko. Prediction of random sequences and universal coding. Problems of Information
Transmission, 24:87–96, 1988.
[20] B. Ryabko. Compression-based methods for nonparametric prediction and estimation of some
characteristics of time series. IEEE Transactions on Information Theory, 55:4309–4315, 2009.
[21] D. Ryabko. Clustering processes. In Proc. ICML 2010, pages 919–926, Haifa, Israel, 2010.
[22] D. Ryabko. Discrimination between B-processes is impossible. Journal of Theoretical Probability, 23(2):565–575, 2010.
[23] D. Ryabko and B. Ryabko. Nonparametric statistical inference for ergodic processes. IEEE
Transactions on Information Theory, 56(3):1430–1435, 2010.
[24] H. Sakoe and S. Chiba. Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech and Signal Processing, 26(1):43–49, 1978.
[25] P. Shields. The Ergodic Theory of Discrete Sample Paths. AMS Bookstore, 1996.
[26] V. M. Zolotarev. Metric distances in spaces of random variables and their distributions. Math.
USSR-Sb, 30(3):373–401, 1976.
| 4510 |@word mild:1 polynomial:2 compression:2 seems:1 smirnov:1 stronger:4 norm:1 vldb:1 covariance:1 reduction:3 necessity:1 series:41 chervonenkis:2 beygelzimer:1 john:1 fn:3 realistic:1 n0:7 discrimination:1 stationary:25 selected:1 xk:4 mental:1 detecting:1 math:1 hyperplanes:1 simpler:1 mathematical:1 constructed:2 become:1 viable:1 consists:3 introduce:1 sakoe:1 x0:1 ra:1 indeed:4 themselves:1 growing:1 multi:1 brain:2 moulines:1 decreasing:4 becomes:1 provided:2 estimating:1 unrelated:2 underlying:1 moreover:3 notation:2 maximizes:1 bounded:3 what:5 laptop:2 israel:1 minimizes:2 developed:2 spoken:1 guarantee:8 every:5 growth:2 universit:2 demonstrates:1 classifier:1 ser:1 zl:2 partitioning:2 farthest:3 control:1 grant:1 positive:1 understood:1 consequence:1 mach:1 id:1 path:1 abuse:1 merge:1 inria:3 chose:4 approximately:1 studied:2 challenging:1 ease:1 range:2 averaged:1 practical:1 acknowledgment:1 testing:13 block:3 lncs:1 universal:2 empirical:10 word:3 radial:1 regular:1 pre:1 suggest:1 cannot:1 put:2 risk:4 impossible:3 optimize:1 measurable:3 equivalent:1 www:1 straightforward:1 go:2 starting:1 ergodic:23 survey:1 fx:8 annals:1 target:9 programming:1 ixi:1 us:1 agreement:1 trick:1 element:1 distancebased:1 satisfying:2 particularly:1 approximated:1 recognition:1 gutman:1 distributional:3 labeled:1 ep:2 csie:1 observed:1 calculate:1 region:1 ryabko:9 decrease:1 halfspaces:3 yk:1 legge:1 dynamic:2 depend:2 solving:5 contrat:1 algebra:2 inapplicable:1 exit:1 completely:3 basis:1 joint:2 various:2 kolmogorov:2 cper:1 artificial:1 rubinstein:2 apparent:1 whose:2 supplementary:2 solve:4 say:5 drawing:1 otherwise:3 anr:3 bci:4 statistic:6 zolotarev:1 seemingly:1 obviously:2 online:2 sequence:14 net:1 fr:1 relevant:1 mixing:15 achieve:1 nauk:1 competition:1 convergence:7 cluster:19 transmission:1 produce:1 generating:5 adam:2 ben:1 illustrate:1 misclustering:1 nearest:1 finitely:1 eq:2 una:1 involves:2 implies:2 come:1 empirica:1 foot:2 stochastic:1 vc:10 material:2 bin:1 explains:1 require:1 education:1 assign:4 suffices:1 preliminary:1 ntu:1 hold:4 practically:1 considered:4 deciding:2 exp:1 scope:1 claim:1 achieves:1 consecutive:1 purpose:1 estimation:3 proc:2 dhk:1 calais:1 extremal:1 council:1 gehrke:1 tool:1 weighted:1 gaussian:2 super:1 pn:1 corollary:3 derived:1 consistently:4 bernoulli:1 check:2 mainly:1 hk:29 contrast:1 attains:1 am:1 inst:1 inference:1 dependent:6 cnrs:2 sb:1 ical:1 france:2 interested:2 provably:1 x11:3 classification:31 colt:1 augment:1 ussr:2 special:1 marginal:4 construct:6 sampling:3 lille:2 look:1 unsupervised:4 icml:1 thinking:1 minimized:1 intelligent:1 few:1 homogeneity:15 replaced:1 n1:4 attempt:1 interest:1 highly:3 evaluation:2 deferred:2 bracket:1 devoted:1 closer:1 necessary:1 euclidean:1 haifa:1 theoretical:5 minimal:1 column:1 soft:1 measuring:1 applicability:1 uniform:3 ital:1 daniil:2 too:1 reported:1 synthetic:3 person:1 sequel:2 jos:1 complacs:1 ym:2 together:1 again:1 imagery:1 unavoidable:1 choose:1 summable:1 worse:1 conf:1 chung:1 de:4 coding:1 wk:4 coefficient:4 matter:1 int:1 satisfy:4 ranking:2 depends:1 stream:1 try:2 h1:2 view:1 sup:6 competitive:1 start:2 bayes:1 contribution:1 ni:5 accuracy:1 characteristic:1 identify:1 conceptually:2 etat:1 notoriously:1 explain:1 whenever:1 definition:5 frequency:1 involved:1 pp:3 obvious:1 proof:4 di:2 dataset:2 jeremie:1 popular:1 mantissa:1 recall:1 lim:1 appears:1 higher:3 attained:1 originally:1 supervised:2 harness:1 evaluated:2 cosi:1 strongly:1 furthermore:1 just:2 ergodicity:1 
until:1 langford:2 hand:2 glance:1 del:1 continuity:1 perhaps:1 gray:1 grows:1 mary:3 building:3 verify:1 consisted:1 assigned:1 illustrated:1 uniquely:1 covering:1 x1n:3 bansal:1 demonstrate:1 tn:5 cp:1 interface:3 balcan:2 empirically:1 quences:1 extend:2 slight:1 belong:1 marginals:4 consistency:7 trivially:1 i6:2 funded:1 similarity:1 summands:4 base:1 closest:1 recent:1 irrelevant:1 mint:1 scenario:1 certain:5 verlag:1 binary:17 yi:4 ministry:1 r0:1 converge:1 signal:1 ii:3 multiple:1 harchaoui:1 faster:1 bach:1 long:4 lin:1 concerning:1 prediction:3 regression:1 determinazione:1 metric:8 kernel:5 c1:1 whereas:2 want:1 affecting:1 addressed:2 wealth:1 limn:1 biased:1 rest:2 unlike:1 regional:1 strict:5 recording:1 tend:1 subject:2 spirit:1 seem:1 call:2 integer:2 counting:1 iii:2 identically:1 concerned:1 enough:4 easy:7 independence:2 split:1 sorkin:1 reduce:1 whether:2 feder:1 linkage:4 pollard:2 speech:1 remark:3 se:1 amount:1 nonparametric:2 oliveira:1 svms:1 processed:1 simplest:1 reduced:2 telescope:14 generate:1 fz:1 http:1 s3:1 estimated:5 write:1 discrete:1 group:2 key:1 threshold:1 blum:1 libsvm:3 vast:1 asymptotically:12 sum:1 run:1 letter:2 almost:1 family:1 chih:2 separation:5 prefer:1 bound:5 ct:1 declares:1 infinity:1 ri:4 x2:1 software:1 sake:1 generates:4 speed:5 min:6 separable:3 vempala:1 relatively:1 fortet:2 according:1 project:1 smaller:1 newer:1 tw:1 s1:1 invariant:1 ln:3 previously:1 remains:1 discus:1 cjlin:1 letting:1 fp7:1 kifer:1 available:1 apply:1 shortly:1 original:2 denotes:1 clustering:36 include:1 remaining:1 cf:1 calculating:3 establish:3 classical:1 already:2 wrapping:1 parametric:2 dependence:2 usual:1 kantorovich:2 exhibit:1 distance:41 separate:1 simulated:1 sci:1 epartition:2 reason:3 nobel:2 assuming:1 length:5 mini:4 akad:1 difficult:2 statement:7 stoc:1 nord:1 sigma:2 unknown:6 upper:3 observation:1 markov:1 finite:22 timeseries:1 projets:1 immediate:1 situation:1 extended:3 zadrozny:1 emer:1 y1:12 arbitrary:5 community:1 introduced:1 lampada:1 namely:1 pair:2 required:3 david:1 z1:2 acoustic:1 established:2 nip:1 address:2 able:1 suggested:1 dokl:1 below:5 usually:1 emie:1 coppersmith:1 program:1 preux:1 including:1 memory:1 max:1 shifting:1 pascal2:1 vidyasagar:1 natural:4 braincomputer:1 predicting:1 indicator:8 telescoping:3 technology:1 library:1 dtw:2 sulla:1 roberto:1 interdependent:1 geometric:1 literature:2 asymptotic:3 law:1 fully:1 interesting:2 suph:3 proven:1 remarkable:1 h2:2 sufficient:1 consistent:17 classifying:1 row:1 placed:1 repeat:1 last:1 bias:2 allow:1 weaker:1 ber:1 wide:1 distributed:1 chiba:1 calculated:3 finitedimensional:1 world:3 evaluating:1 xn:4 dimension:15 qn:2 default:1 commonly:1 made:1 coincide:1 far:3 ec:1 transaction:5 supremum:1 kcpa:2 uai:1 vers:1 summing:1 assumed:2 pasde:1 xi:6 discriminative:1 table:3 learn:1 mourier:2 robust:1 complex:1 european:1 domain:1 aistats:1 main:3 s2:1 n2:3 x1:21 quantiles:1 borel:1 bianca:1 shield:1 jmlr:1 third:1 theorem:17 rk:2 removing:1 minute:1 jen:1 bookstore:1 khaleghi:1 attuari:1 er:1 maxi:2 appeal:1 dk:6 svm:6 cortes:1 concern:3 weakest:1 organisation:1 consist:1 exists:1 vapnik:3 lifl:2 ci:1 margin:1 easier:1 suited:1 mill:1 infinitely:1 expressed:2 chang:1 springer:2 minimizer:1 satisfies:1 dh:24 acm:2 abbreviation:1 conditional:1 formulated:1 ann:1 exposition:1 change:5 specifically:2 infinite:4 reducing:2 uniformly:2 typical:1 lemma:8 called:6 total:1 experimental:3 la:2 meaningful:1 explo:1 select:1 support:2 latter:5 absolutely:1 |
3,879 | 4,511 | On Lifting the Gibbs Sampling Algorithm
Vibhav Gogate
Department of Computer Science
The University of Texas at Dallas
Richardson, TX, 75080, USA
[email protected]
Deepak Venugopal
Department of Computer Science
The University of Texas at Dallas
Richardson, TX, 75080, USA
[email protected]
Abstract
First-order probabilistic models combine the power of first-order logic, the de
facto tool for handling relational structure, with probabilistic graphical models,
the de facto tool for handling uncertainty. Lifted probabilistic inference algorithms
for them have been the subject of much recent research. The main idea in these
algorithms is to improve the accuracy and scalability of existing graphical models?
inference algorithms by exploiting symmetry in the first-order representation. In
this paper, we consider blocked Gibbs sampling, an advanced MCMC scheme,
and lift it to the first-order level. We propose to achieve this by partitioning the
first-order atoms in the model into a set of disjoint clusters such that exact lifted
inference is polynomial in each cluster given an assignment to all other atoms not
in the cluster. We propose an approach for constructing the clusters and show how
it can be used to trade accuracy with computational complexity in a principled
manner. Our experimental evaluation shows that lifted Gibbs sampling is superior
to the propositional algorithm in terms of accuracy, scalability and convergence.
1 Introduction
Modeling large, complex, real-world domains requires the ability to handle both rich relational structure and large amount of uncertainty. Unfortunately, the two existing representation and reasoning
tools of choice ? probabilistic graphical models (PGMs) and first-order logic ? are unable to effectively handle both. PGMs can compactly represent and reason about uncertainty. However, they are
propositional and thus ill-equipped to handle relational structure. First-order logic can effectively
handle relational structure. However, it has no representation for uncertainty. Therefore, combining the representation and reasoning power of first-order logic with PGMs is a worthwhile goal.
Statistical relational learning (SRL) [7] is an emerging field which attempts to do just that.
The key task in SRL is inference - the problem of answering a query given an SRL model. In principle, we can simply ground (propositionalize) the given SRL model to yield a PGM and thereby
solve the inference problem in SRL by reducing it to inference over PGMs. This approach is problematic and impractical, however, because the PGMs obtained by grounding a SRL model can be
substantially large, having millions of variables and billions of features; existing inference algorithms for PGMs are unable to handle problems at this scale. An alternative approach, which has
gained prominence since the work of Poole [25] is lifted or first-order inference. The main idea,
which is similar to theorem proving in first-order logic, is to take a propositional inference algorithm and exploit symmetry in its execution by performing inference over a group of identical or
interchangeable random variables. The algorithms are called lifted algorithms because they identify
symmetry by consulting the first-order representation without grounding the model.
Several lifted algorithms have been proposed to date. Prominent exact algorithms are first-order
variable elimination [25] and its extensions [2, 23], which lift the variable elimination algorithm, and
probabilistic theorem proving (PTP) [8] which lifts the weighted model counting algorithm [1, 29].
Notable approximate inference algorithms are lifted Belief propagation [30] and lifted importance
sampling [8, 9], which lift belief propagation [20] and importance sampling respectively.
1
In this paper, we lift blocked Gibbs sampling, an advanced MCMC technique. Blocked Gibbs
sampling improves upon the Gibbs sampling algorithm by grouping variables (each group is called
a block) and then jointly sampling all variables in the block [10, 16]. Blocking improves the mixing
time and as a result improves both the accuracy and convergence of Gibbs sampling. The difficulty
is that to jointly sample variables in a block, we need to compute a joint distribution over them. This
is typically exponential in the treewidth of the ground network projected on the block.
Several earlier papers have attempted to exploit relational or first-order structure in MCMC sampling. Notable examples are lazy MC-SAT [27], Metropolis-Hastings MCMC for Bayesian logic
(BLOG) [18], typed MCMC [14] and orbital MCMC [21]. Unfortunately, none of the aforementioned techniques are truly lifted. In particular, they do not exploit first-order structure to the fullest
extent. In fact, lifting a generic MCMC technique is difficult because at each point in order to ensure
convergence to the desired stationary distribution one has to maintain an assignment to all random
variables in the ground network. We circumvent these issues by lifting the blocked Gibbs sampling
algorithm, which as we show is more amenable to lifting.
Our main idea in applying the blocking approach to SRL models is to partition the set of first-order
atoms in the model into disjoint clusters such that PTP (an exact lifted inference scheme) is feasible
in each cluster given an assignment to all other atoms not in the cluster. Given such a set of clusters,
we show that Gibbs sampling is essentially a message passing algorithm over the cluster graph
formed by connecting clusters that have atoms that are in the Markov blanket of each other. Each
message from a sender to a receiving cluster is a truth assignment to all ground atoms that are in the
Markov blanket of the receiving cluster. We show how to store this message compactly by taking
advantage of the first-order representation yielding a lifted MCMC algorithm.
We present experimental results comparing the performance of lifted blocked Gibbs sampling with
(propositional) blocked Gibbs sampling, MC-SAT [26, 27] and Lifted BP [30] on various benchmark SRL models. Our experiments show that lifted Gibbs sampling is superior to blocked Gibbs
sampling and MC-SAT in terms of convergence, accuracy and scalability. It is also more accurate
than lifted BP on some instances.
2 Notation and Preliminaries
In this section, we describe notation and preliminaries on propositional logic, first-order logic,
Markov logic networks and Gibbs sampling. For more details, refer to [3, 13, 15].
The language of propositional logic consists of atomic sentences called propositions or atoms, and
logical connectives such as ∧ (conjunction), ∨ (disjunction), ¬ (negation), ⇒ (implication) and ⇔
(equivalence). Each proposition takes values from the binary domain {False, True} (or {0, 1}).
A propositional formula f is an atom, or any complex formula that can be constructed from atoms
using logical connectives. For example, A, B and C are propositional atoms and f = A ∧ ¬B ∨ C is a
propositional formula. A knowledge base (KB) is a set of formulas. A world is a truth assignment
to all atoms in the KB.
First-order logic (FOL) generalizes propositional logic by allowing atoms to have internal structure;
an atom in FOL is a predicate that represents relations between objects. A predicate consists of a
predicate symbol, denoted by Monospace fonts, e.g., Friends, Smokes, etc., followed by a parenthesized list of arguments called terms. A term is a logical variable, denoted by lower case letters
such as x, y, z, etc., or a constant, denoted by upper case letters such as X, Y , Z, etc. We assume
that each logical variable, e.g., x, is typed and takes values over a finite set Δx. The language of FOL
also includes two quantifiers: ∀ (universal) and ∃ (existential), which express properties of an entire
collection of objects. A formula in first-order logic is a predicate (atom), or any complex sentence
that can be constructed from atoms using logical connectives and quantifiers. For example, the formula ∀x Smokes(x) ⇒ Asthma(x) states that all persons who smoke have asthma. ∃x Cancer(x)
states that there exists a person x who has cancer. A first-order KB is a set of first-order formulas.
In this paper, we use a subset of FOL which has no function symbols, equality constraints or existential quantifiers. We also assume that domains are finite (and therefore function-free) and that there is
a one-to-one mapping between constants and objects in the domain (Herbrand interpretations). We
assume that each formula f is of the form ∀x f, where x is the set of variables in f and f is a
conjunction or disjunction of literals, each literal being an atom or its negation. For brevity, we will
drop ∀ from all the formulas. Given variables x = {x1, ..., xn} and constants X = {X1, ..., Xn}
where Xi ∈ Δxi, f[X/x] is obtained by substituting every occurrence of variable xi in f with Xi.
A ground formula is a formula obtained by substituting all of its variables with a constant. A ground
KB is a KB containing all possible groundings of all of its formulas. For example, the grounding
of a KB containing one formula, Smokes(x) ⇒ Asthma(x) where Δx = {Ana, Bob}, is a KB
containing two formulas: Smokes(Ana) ⇒ Asthma(Ana) and Smokes(Bob) ⇒ Asthma(Bob). A
world in FOL is a truth assignment to all atoms in its grounding.
Markov logic [3] extends FOL by softening the hard constraints expressed by the formulas and
is arguably the most popular modeling language for SRL. A soft formula or a weighted formula
is a pair (f, w) where f is a formula in FOL and w is a real number. A Markov logic network
(MLN), denoted by M, is a set of weighted formulas (fi , wi ). Given a set of constants that represent
objects in the domain, a Markov logic network defines a Markov network or a log-linear model. The
Markov network is obtained by grounding the weighted first-order knowledge base and represents
the following probability distribution.
    P_M(ω) = (1/Z(M)) exp( Σ_i w_i N(f_i, ω) )   (1)

where ω is a world, N(f_i, ω) is the number of groundings of f_i that evaluate to True in the world
ω, and Z(M) is a normalization constant or the partition function.
In this paper, we assume that the input MLN to our algorithm is in normal form [11, 19]. We
require this for simplicity of exposition. Our main algorithm can be easily modified to work with
other canonical forms such as parfactors [25] and first order CNFs with substitution constraints [8].
However, its specification becomes much more complicated and messy. A normal MLN [11] is an
MLN that satisfies the following two properties: (1) There are no constants in any formula, and (2)
If two distinct atoms with the same predicate symbol have variables x and y in the same position
then Δx = Δy. Note that in a normal MLN, we assume that the terms in each atom are ordered and
therefore we can identify each term by its position in the order.
2.1 Gibbs Sampling and Blocking
Given an MLN, a set of query atoms and evidence, we can adapt the basic (propositional) Gibbs sampling [6] algorithm for computing the marginal probabilities of query atoms given evidence as follows. First, we ground all the formulas in the MLN, yielding a Markov network. Second, we instantiate all the evidence atoms in the network. Assume that the resulting evidence-instantiated network
is defined over a set of variables X. Third, we generate N samples (x̄^(1), ..., x̄^(N)) (a sample is a
truth assignment to all random variables in the Markov network) as follows. We begin with a random
assignment to all variables, yielding x̄^(0). Then for t = 1, ..., N, we perform the following steps.
Let (X_1, ..., X_n) be an arbitrary ordering of the variables in X. Then, for i = 1 to n, we generate a new
value x̄_i^(t) for X_i by sampling a value from the distribution P(X_i | x̄_1^(t), ..., x̄_{i−1}^(t), x̄_{i+1}^(t−1), ..., x̄_n^(t−1)).
(This is often called systematic scan Gibbs sampling. An alternative approach is random scan Gibbs
sampling, which often converges faster than systematic scan Gibbs sampling.) For conciseness, we
will write P(X_i | x̄_{−i}) = P(X_i | x̄_1^(t), ..., x̄_{i−1}^(t), x̄_{i+1}^(t−1), ..., x̄_n^(t−1)). Once the required N samples
are generated, we can use them to answer any query over the model. In particular, the marginal
probability for each variable can be estimated by averaging the conditional marginals:

    P̂(x̄_i) = (1/N) Σ_{t=1}^{N} P(x̄_i | x̄_{−i}^(t))

Note that in Markov networks, P(X_i | x̄_{−i}^(t)) = P(X_i | x̄_{−i,MB(X_i)}^(t)), where MB(X_i) is the Markov
blanket (the set of variables that share a function with X_i) of X_i and x̄_{−i,MB(X_i)}^(t) is the projection
of x̄_{−i}^(t) on MB(X_i).
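The following Python sketch makes the systematic-scan procedure concrete for a generic binary model; conditional(i, state) stands in for P(X_i | MB(X_i)) and must be supplied by the model (all names are our illustrative assumptions):

```python
import random

def gibbs_marginals(n_vars, conditional, num_samples, seed=0):
    # Systematic-scan Gibbs sampling over binary variables.
    # conditional(i, state) must return P(X_i = 1 | all other variables);
    # in a Markov network this depends only on the Markov blanket of X_i.
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_vars)]
    marginals = [0.0] * n_vars
    for _ in range(num_samples):
        for i in range(n_vars):
            p1 = conditional(i, state)
            state[i] = 1 if rng.random() < p1 else 0
            # Average the conditional (not the sampled value), as in the
            # estimator above.
            marginals[i] += p1
    return [m / num_samples for m in marginals]
```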
The sampling distribution of Gibbs sampling converges to the posterior distribution (the distribution associated with the evidence instantiated Markov network) as the number of samples increases
because the resulting Markov chain is guaranteed to be aperiodic and ergodic (see [15] for details).
The main idea in blocked Gibbs sampling [10] is grouping variables to form a block, and then
jointly sampling all variables in a block given an assignment to all other variables not in the block.
Blocking improves mixing yielding a more accurate sampling algorithm [15]. However, the computational complexity of jointly sampling all variables in a block typically increases with the treewidth
of the Markov network projected on the block. Thus, in practice, given time and memory resource
constraints, the main issue in blocked Gibbs sampling is finding the right balance between computational complexity and accuracy.
3 Our Approach
We illustrate the key ideas in our approach using an example MLN having two weighted formulas:
R(x, y) ∨ S(y, z), w1 and S(y, z) ∨ T(z, u), w2. Note that the problem of computing the partition
function of this MLN for arbitrary domain sizes is non-trivial; it cannot be polynomially solved
using existing exact lifted approaches such as PTP [8] and lifted VE [2].
Our main idea is to partition the set of atoms into disjoint blocks (clusters) such that PTP is polynomial in each cluster and then sample all atoms in the cluster jointly. PTP is polynomial if we can
recursively apply its two lifting rules (defined next), the power rule and the generalized binomial
rule, until the treewidth of the remaining ground network is bounded by a constant.
The power rule is based on the concept of a decomposer. Given a normal MLN M, a set of logical
variables, denoted by x, is called a decomposer if it satisfies the following two conditions: (i) Every
atom in M contains exactly one variable from x, and (ii) For any predicate symbol R, there exists a
position s.t. variables from x only appear at that position in atoms of R. Given a decomposer x, it
is easy to show that Z(M) = [Z(M[X/x])]^{|Δx|} where x ∈ x and M[X/x] is the MLN obtained
by substituting all logical variables x in M by the same constant X ∈ Δx and then converting the
resulting MLN to a normal MLN. Note that for any two variables x, y in x, Δx = Δy by normality.
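As a small worked instance of the power rule (our own illustrative example, not one from the paper): for an MLN with the single weighted clause R(x) ∨ S(x) with weight w, the variable x is a decomposer, so

    Z(M) = [Z(M[X/x])]^{|Δx|} = (3e^w + 1)^{|Δx|},

since the ground MLN M[X/x] contains the single clause R(X) ∨ S(X), which is satisfied by 3 of its 4 truth assignments (each contributing e^w to Z) and falsified by 1 (contributing e^0 = 1).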
The generalized binomial rule is used to sample singleton atoms efficiently (the rule also requires that the atom is not involved in self-joins, i.e., it does not appear more than once in
the same formula). Given a normal MLN M having a singleton atom R(x), we can show that

    Z(M) = Σ_{i=0}^{|Δx|} (|Δx| choose i) Z(M|R̄_i) w(i) 2^{p(i)}

where R̄_i is a sample of R s.t. exactly i tuples are set to True. M|R̄ is the MLN obtained from M by performing the following steps in order: (i) Ground
all R(x) and set its groundings to have the same assignment as R̄_i, (ii) Delete formulas that evaluate
to either True or False, (iii) Delete all groundings of R(x) and (iv) Convert the resulting MLN
to a normal MLN. w(i) is the exponentiated sum of the weights of formulas that evaluate to True
and p(i) is the number of ground atoms that are removed from the MLN as a result of removing
formulas (these are essentially don't care atoms which can be assigned to either True or False).
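To make the rule concrete, here is a tiny Python check (ours, not from the paper) for the degenerate one-formula MLN {R(x), w}, where the rule reduces to Z = Σ_i C(n, i) e^{w i} (here Z(M|R̄_i) = 1 and p(i) = 0); it matches brute-force enumeration over all 2^n worlds:

```python
import itertools, math

def z_brute_force(n, w):
    # Partition function of the MLN {R(x), w} with |domain(x)| = n,
    # enumerating all 2^n truth assignments to the groundings of R.
    return sum(math.exp(w * sum(world))
               for world in itertools.product([0, 1], repeat=n))

def z_binomial_rule(n, w):
    # Generalized binomial rule: group worlds by the number i of true
    # groundings; each group has C(n, i) worlds of weight e^{w i}.
    return sum(math.comb(n, i) * math.exp(w * i) for i in range(n + 1))

assert abs(z_brute_force(8, 0.5) - z_binomial_rule(8, 0.5)) < 1e-6
```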
Now, let us apply the clustering idea to our example MLN. Let us put each first-order atom in a cluster by
itself, namely we have three clusters: R(x, y), S(y, z) and T(z, u) (see Figure 1(a)). Note that each (first-order)
cluster represents all groundings of all atoms in the cluster. To perform Gibbs sampling over this clustering,
we need to compute three conditional distributions: P(R(x, y) | S̄(y, z), T̄(z, u)), P(S(y, z) | R̄(x, y), T̄(z, u))
and P(T(z, u) | R̄(x, y), S̄(y, z)), where R̄(x, y) denotes a truth assignment to all possible groundings of R. Let
the domain size of each variable be n. Naively, given an assignment to all other atoms not in the cluster, we will
need O(2^{n^2}) time and space for computing and specifying the joint distribution at each cluster. This is because there are n^2 ground atoms associated with each
cluster. Notice however that all groundings of each first-order atom are conditionally independent
of each other given a truth assignment to all other atoms. In other words, we can apply PTP here
and compute each conditional distribution in O(n^3) time and space (since there are n^3 groundings
of each formula and we need to process each ground formula at least once). Thus, the complexity
of sampling all atoms in all clusters is O(n^3). Note that the complexity of sampling all variables
using propositional Gibbs sampling is also O(n^3).

[Figure 1: Two possible clusterings for lifted blocked Gibbs sampling on the example MLN having
two weighted formulas. (a) Clustering 1: three singleton clusters R(x, y), S(y, z) and T(z, u), with
edges labeled y and z. (b) Clustering 2: clusters {R(x, y), S(y, z)} and {T(z, u)}, with an edge labeled z.]
Now, let us consider an alternative clustering in which we have two clusters as shown in Figure
1(b). Intuitively, this clustering is likely to yield better accuracy than the previous one because more
4
atoms will be sampled jointly. Counter-intuitively, however, as we show next, Clustering 2 will yield
a blocked sampler having smaller complexity than the one based on Clustering 1.
To perform blocked Gibbs sampling over Clustering 2, we need to compute two distributions: P(R(x, y), S(y, z) | T̄(z, u)) and P(T(z, u) | R̄(x, y), S̄(y, z)). Let us see how PTP will compute
P(R(x, y), S(y, z) | T̄(z, u)). If we instantiate all groundings of T, we get the following reduced
MLN: {R(x, y) ∨ S(y, Z_i), w1}_{i=1}^{n} and {S(y, Z_i), k_i w2}_{i=1}^{n}, where Z_i ∈ Δz and k_i is the number
of False groundings of T(Z_i, u). This MLN contains a decomposer y. PTP will now apply the
power rule, yielding formulas of the form {R(x, Y) ∨ S(Y, Z_i), w1}_{i=1}^{n} and {S(Y, Z_i), k_i w2}_{i=1}^{n}
where Y ∈ Δy. R(x, Y) is a singleton atom and therefore, applying the generalized binomial rule,
we will get n + 1 reduced MLNs, each containing n atoms of the form {S(Y, Z_i)}_{i=1}^{n}. These
atoms are conditionally independent of each other and a distribution over them can be computed
in O(n) time. Thus, the complexity of computing P(R(x, y), S(y, z) | T̄(z, u)) is O(n^2). Samples
for R and S can be generated from P(R(x, y), S(y, z) | T̄(z, u)) in O(n^2) time as well. Notice that
P(T(z, u) | R̄(x, y), S̄(y, z)) = P(T(z, u) | S̄(y, z)) because R is not in the Markov blanket of T. This
distribution can also be computed in O(n^2) time. Therefore, the complexity of sampling all atoms
using the clustering shown in Figure 1(b) is O(n^2).
Space Complexity: For Clustering 2, notice that to compute the conditional distribution
P(R(x, y), S(y, z) | T̄(z, u)), we only need to know how many groundings of T(Z_i, u) are True in
T̄(z, u) for all Z_i ∈ Δz. Cluster T(z, u) can share this information with its neighbor using only
O(n) space. Similarly, to compute P(T(z, u) | S̄(y, z)) we only need to know how many groundings
of S(y, Z_i) are True in S̄(y, z) for all Z_i ∈ Δz. This requires O(n) space and thus the overall space
complexity of Clustering 2 is O(n). On the other hand, the space complexity of Gibbs sampling
over Clustering 1 is O(n^2).
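In code, the lifted message for this example is just a vector of per-constant counts; the following Python fragment (the function name and the assignment dictionary are our illustrative assumptions) shows the O(n) representation of the message sent by the cluster {T(z, u)}:

```python
# Lifted message from cluster {T(z, u)} to cluster {R(x, y), S(y, z)}:
# for each constant Z in the domain of z, store how many groundings
# of T(Z, u) are currently True. O(n) space instead of O(n^2).

def lifted_message_T(assignment, domain_z, domain_u):
    # assignment[(Z, U)] is the current truth value (0/1) of T(Z, U).
    return {Z: sum(assignment[(Z, U)] for U in domain_u) for Z in domain_z}
```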
4 The Lifted Blocked Gibbs Sampling Algorithm
Next, we will formalize the discussion in the previous section yielding a lifted blocked Gibbs sampling algorithm. We begin with some required definitions.
We define a cluster as a set of first order atoms (these atoms will be sampled jointly in a lifted Gibbs
sampling iteration). Given a set of disjoint clusters {C1 , . . . , Cm }, the Markov blanket of a cluster
Ci is the set of clusters that have at least one atom that is in the Markov blanket of an atom in Ci .
Given a MLN M, the Gibbs cluster graph is a graph G (each vertex of G is a cluster) such that: (i)
Each atom in the MLN is in exactly one cluster of G (ii) Two clusters Ci and Cj in G are connected
by an edge if Cj is in the Markov blanket of Ci . Note that by definition if Ci is in the Markov
blanket of Cj , then Cj is in the Markov blanket of Ci .
The lifted blocked Gibbs sampling algorithm (see Algorithm 1) can be envisioned as a message
passing algorithm over a Gibbs cluster graph G. Each edge (Ci, Cj) in G stores two messages in
each direction. The message from Ci to Cj contains the current truth assignment to all groundings of all atoms (we will discuss how to represent the truth assignment in a lifted manner
shortly) that are in the Markov blanket of one or more atoms in Ci. We initialize the messages randomly. Then at each Gibbs iteration, we generate
a sample over all atoms by sampling the clusters along an ordering (C1, ..., Cm) (Steps 3-10). At
each cluster, we first use PTP to compute a conditional joint distribution over all atoms in the cluster given an assignment to atoms in their Markov
blanket. This assignment is derived using the incoming messages. Then, we sample all atoms in
the cluster from the joint distribution and update the estimate for query atoms in the cluster as well
as all outgoing messages. We can prove that:
Theorem 1. The Markov chain induced by Algorithm 1 is ergodic and aperiodic and its stationary
distribution is the distribution represented by the input normal MLN.
Algorithm 1: Lifted Blocked Gibbs Sampling
Input: A normal MLN M, a Gibbs cluster graph G, an integer N and a set of query atoms
Output: A marginal distribution over the query atoms
1  begin
2      for t = 1 to N do
3          Let (C1, ..., Cm) be an arbitrary ordering of clusters of G
           // Gibbs iteration
4          for i = 1 to m do
5              M(Ci) = MLN obtained by instantiating the Markov blanket of Ci based on the incoming messages
6              Compute P(Ci) by running PTP on M(Ci)
7              Sample a truth assignment to all atoms in Ci from P(Ci)
8              Update the estimate of all query atoms in Ci
9              Update all outgoing messages from Ci
10 end
4.1 Lifted Message Representation
We say that a representation of truth assignments to the groundings of an atom is lifted if we only
specify the number of true (or false) assignments to its full or partial grounding.
Example 1. Consider an atom R(x, y), where Δx = {X1, X2} and Δy = {Y1, Y2}. We can
represent the truth assignment (R(X1 , Y1 ) = 1, R(X1 , Y2 ) = 0, R(X2 , Y1 ) = 1, R(X2 , Y2 ) = 0) in a
lifted manner using either an integer 2 or a vector ([Y1 , 2], [Y2 , 0]). The first representation says that
2 groundings of R(x, y) are true while the second representation says that 2 groundings of R(x, Y1 )
and 0 groundings of R(x, Y2 ) are true.
Next, we state sufficient conditions for representing a message in a lifted manner while ensuring correctness, summarized in Theorem 2. We begin with a required definition. Given an atom
R(x1 , . . . , xp ) and a subset of atoms {S1 , . . . , Sk } from its Markov blanket, we say that a term at
position i in R is a shared term w.r.t. {S1 , . . . , Sk } if there exists a formula f such that in f , a logical
variable appears at position i in R and in one or more atoms in {S1 , . . . , Sk }. For instance, in our
running example, y (position 2) is a shared term of R w.r.t. {S} but x (position 1) is not.
Theorem 2 (Sufficient Conditions for a Lifted Message Representation). Given a Gibbs cluster
graph G and an MLN M, let R be an atom in Ci and let Cj be a neighbor of Ci in G. Let SR,Cj be
the set of atoms formed by taking an intersection between the Markov blanket of R and the union of
the Markov blanket of atoms in Cj. Let x be the set of shared terms of R w.r.t. SR,Cj ∪ Cj and y
be the set of remaining terms in R. Let the outgoing message from Ci to Cj be represented using a
vector of |Δx| pairs of the form [X_k, r_k], where Δx is the Cartesian product of the domains of all
terms in x, X_k ∈ Δx is the k-th element in Δx, and r_k is the number of groundings of R(X_k, y) that
are true in the current assignment. If all messages in the lifted Blocked Gibbs sampling algorithm
(Algorithm 1) use the aforementioned representation, then the stationary distribution of the Markov
chain induced by the algorithm is the distribution represented by the input normal MLN.
Proof. (Sketch). The generalized Binomial rule states that all MLNs obtained by conditioning on a
singleton atom S with exactly k of its groundings set to true are equivalent to each other. In other
words, in order to compute the distribution represented by the MLN conditioned on S, we only need
to know how many groundings of S are set to true. Next, we will show that the atom obtained by
(partially) grounding the shared terms x of an atom R in cluster Ci , namely R(Xk , y) (where y is
the set of terms of R that are not shared) is equivalent to a singleton atom and therefore knowing the
number of groundings of R(Xk , y) that are set to true is sufficient to compute the joint distribution
over the atoms in cluster Cj , where Ci and Cj are neighbors in G.
Consider the MLN M′ which is obtained from M by first removing all formulas that do not mention
atoms in Cj and then (partially) grounding all the shared terms of R. Let y′ be a logical variable such
that its domain Δy′ = Δy, where Δy is the Cartesian product of the domains of all variables in y,
and let R′_k(y′) = R(X_k, y) where X_k ∈ Δx is the k-th element in Δx. Notice that we can replace
each atom R(X_k, y) in M′ by R′_k(y′) without changing the associated distribution. Moreover, each
atom R′_k(y′) is a singleton and therefore it follows from the generalized binomial rule that in order
to compute the distribution associated with M′ conditioned on R′_k(y′), we only need to know how
many of its possible groundings are true. Since Ci sends precisely this information to Cj using the
message defined in the statement of this theorem, it follows that the lifted Blocked Gibbs sampling
algorithm which uses a lifted message representation is equivalent to the algorithm (Algorithm 1)
that uses a propositional representation. Since Algorithm 1 converges to the distribution represented
by the MLN (Theorem 1), the proof follows.
4.2 Complexity
Theorem 2 provides a method for representing the messages succinctly by taking advantage of the
symmetry at inference time. It also generalizes the ideas presented in the previous section (last
paragraph) and helps us bound the space complexity of each message. Formally,
Theorem 3 (Space Complexity of a Message). Given a Gibbs cluster graph G and an MLN M,
let the outgoing message from cluster Ci to cluster Cj in G be defined over the set {R1 , . . . , Rk } of
atoms. Let xi denote the set of shared terms of Ri that satisfy the conditions outlined in Theorem 2.
Then, the space complexity of representing the message is O(Σ_{i=1}^{k} |Δx_i|).
Note that the time/space requirements of the algorithm are the sum of the time/space required to run
PTP for a cluster and the time/space for the message from the cluster. We can compute the time
and space complexity of PTP at a cluster by running it schematically as follows. We apply the
power rule as before but explore only one randomly selected branch in the search tree induced by
the generalized binomial rule. Recall that applying the generalized binomial rule will result in n + 1
recursive calls (i.e., the search tree node has branching factor of n + 1) where n is the domain size of
the singleton atom. If neither the power rule nor the generalized binomial rule can be applied at any
point during search, the complexity of PTP is exponential in the treewidth of the remaining ground
network. More precisely, the complexity of PTP is O(exp(g) · exp(w + 1)) where g is the number
of times the generalized binomial rule is applied and w is the treewidth (computed heuristically) of
the remaining ground network.
4.3 Constructing the Gibbs Cluster Graph
Next, we present a heuristic algorithm for constructing the Gibbs cluster graph. From a computational viewpoint, we want its time and space requirements to be as small as possible.
From an approximation quality viewpoint, to improve mixing, we want to jointly sample, i.e.,
cluster together highly coupled/correlated variables. Formally, we want to

    Maximize: Σ_i λ(Ci),   Subject to: S(G) ≤ α, T(G) ≤ β

where S(G) and T(G) denote the time and space requirements of the Gibbs cluster graph
G, λ(Ci) measures the amount of coupling in the cluster Ci of G, and parameters α and β are
used to bound the time and space complexity respectively. In our implementation, we measure coupling using the number of times two atoms
appear together in a formula.

Algorithm 2: Construct Gibbs Cluster Graph
Input: A normal MLN M, complexity bounds α and β
Output: A Gibbs cluster graph G
1  begin
2      Initialization: Construct a Gibbs cluster graph G with exactly one atom in each cluster
3      while True do
4          F = ∅  // F: set of feasible cluster graphs
5          for all pairs of clusters Ci and Cj in G do
6              Merge Ci and Cj yielding a cluster graph G′
7              if T(G′) ≤ T(G) and S(G′) ≤ S(G) then
8                  Add G′ to F
9              else if T(G′) ≤ β and S(G′) ≤ α then
10                 Add G′ to F
11         If F is empty return G
12         G = cluster graph in F that has the maximum Σ_i λ(Ci)
13 end
The optimization problem is NP-hard in general and therefore we propose to use the greedy approach
given in Algorithm 2 for solving it. The algorithm begins by constructing a Gibbs cluster graph in
which each first-order atom is in a cluster by itself. Then, in the while loop, the algorithm tries
to iteratively improve the cluster graph. At each iteration, given the current cluster graph G, for
every possible pair of clusters (Ci, Cj) of G, the algorithm creates a new cluster graph G′ from G
by merging Ci and Cj. Among these graphs, the algorithm selects the graph that yields the most
coupling and at the same time either has smaller complexity than G or satisfies the input complexity
bounds α and β. It then replaces G with the selected graph and iterates until the graph cannot be
improved. Note that increasing the cluster size may decrease the complexity of the cluster graph in
some cases and therefore we require steps 7 and 8, which add G′ to the feasible set if its complexity is
smaller than G. Also note that the algorithm is not guaranteed to return a cluster graph that satisfies
the input complexity bounds, even if such a cluster graph exists. If the algorithm fails then we may
have to use local search or dynamic programming; both are computationally expensive.
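A compact Python rendering of this greedy procedure, assuming black-box estimators time_cost(G), space_cost(G) and coupling(G) for T(G), S(G) and Σ_i λ(Ci) (all three are stand-ins for the schematic PTP run and the formula-co-occurrence count described above):

```python
def construct_cluster_graph(atoms, time_cost, space_cost, coupling,
                            alpha, beta):
    # Greedy construction (Algorithm 2): start with singleton clusters,
    # repeatedly merge the pair that maximizes coupling among merges
    # that either reduce cost or stay within the input bounds.
    G = [frozenset([a]) for a in atoms]
    while True:
        feasible = []
        for i in range(len(G)):
            for j in range(i + 1, len(G)):
                merged = [c for k, c in enumerate(G) if k not in (i, j)]
                merged.append(G[i] | G[j])
                if (time_cost(merged) <= time_cost(G)
                        and space_cost(merged) <= space_cost(G)):
                    feasible.append(merged)
                elif time_cost(merged) <= beta and space_cost(merged) <= alpha:
                    feasible.append(merged)
        if not feasible:
            return G
        G = max(feasible, key=coupling)
```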
5 Experiments
In this section, we compare the performance of lifted blocked Gibbs sampling (LBG) with (propositional) blocked Gibbs sampling (BG), lazy MC-SAT [26, 27] and lifted belief propagation (LBP)
[30]. We experimented with the following four MLNs: (i) an RST MLN having two formulas, M1: [R(x) ∨ S(x, y), w1]; [S(x, y) ∨ T(y, z)], (ii) a toy Smoker-Asthma-Cancer MLN
having three formulas, M2: [Asthma(x) ⇒ ¬Smokes(x)], [Asthma(x) ∧ Friends(x, y) ⇒
¬Smokes(y)], [Smokes(x) ⇒ Cancer(x)], (iii) the example R, S, T MLN defined in Section 3, M3,
and (iv) the WEBKB MLN, M4, used in [17]. Note that the first two MLNs can be solved in polynomial
time using PTP while PTP is exponential on M3 and M4. For each MLN, we set 10% randomly
selected ground atoms as evidence. We varied the number of objects in the domain from 5 to 200.
We used a time-bound of 1000 seconds for all algorithms.
Figure 2: KL divergence as a function of time for: (a) M1 with 50 objects and (b) M2 with 50 objects.
Convergence diagnostic using Gelman-Rubin statistic (R) for (c) M3 with 25 objects and (d) M4 with 25
objects. Note that for lifted BP, the values displayed are the ones obtained after the algorithm has converged.
Time required by 100 Gibbs iterations as a function of the number of objects for (e) M3 and (f) M4 .
We implemented LBG and BG in C++ and used alchemy [12] to implement MC-SAT and LBP.
For LBG, BG and MC-SAT, we used a burn-in of 100 samples to negate the effects of initialization. For M1 and M2 , we measure the accuracy using the KL divergence between the estimated
marginal probabilities and the true marginal probabilities computed using PTP. Since computing exact marginals of M3 and M4 is not feasible, we perform convergence diagnostics for LBG and BG
using the Gelman-Rubin statistic [5], denoted by R. R measures the disagreement between chains
by comparing the between-chain variances with the within-chain variances. The closer the value of
R to 1, the better the mixing.
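For reference, the statistic can be computed from multiple chains as follows (a standard textbook computation for R [5]; the NumPy usage is ours):

```python
import numpy as np

def gelman_rubin(chains):
    # chains: (m, n) array of m chains, each with n samples of one
    # scalar quantity. Returns the potential scale reduction factor R.
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)
```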
Figure 2 shows the results. Figures 2(a) and 2(b) show the KL divergence as a function of time for
M1 and M2 respectively. In both cases, LBG converges much faster than BG and MC-SAT and
has smaller error. LBP is more accurate than LBG on M1 while LBG is more accurate than LBP on
M2 . Figures 2(c) and 2(d) show log(R) as a function of time for M3 and M4 respectively. We see
that the Markov chain associated with LBG mixes much faster than the one associated with BG. To
measure scalability, we use running time per Gibbs iteration as a performance metric. Figures 2(e)
and 2(f) show the time required by 100 Gibbs iterations as a function of number of objects for M3
and M4 respectively. They clearly demonstrates that LBG is more scalable than BG.
6 Summary and Future Work
In this paper, we proposed lifted blocked Gibbs sampling, a new algorithm that improves blocked Gibbs sampling by exploiting relational or first-order structure. Our algorithm operates by constructing a Gibbs cluster graph, which represents a partitioning of atoms into clusters, and then performing message passing over the graph. Each message is a truth assignment to the Markov blanket of the cluster, and we showed how to represent it in a lifted manner. We proposed an algorithm for constructing the Gibbs cluster graph and showed that it can be used to trade accuracy with computational complexity. Our experiments demonstrate clearly that lifted blocked Gibbs sampling is more accurate and scalable than propositional blocked Gibbs sampling as well as MC-SAT.
Future work includes: lifting Rao-Blackwellised Gibbs sampling; applying our lifting rules to slice
sampling [22] and flat histogram MCMC [4]; developing new clustering strategies; etc.
Acknowledgements: This research was partly funded by the ARO MURI grant W911NF-08-1-0242. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO or the U.S. Government.
References
[1] M. Chavira and A. Darwiche. On probabilistic inference by weighted model counting. Artificial Intelligence, 172(6-7):772–799, 2008.
[2] R. de Salvo Braz. Lifted First-Order Probabilistic Inference. PhD thesis, University of Illinois, Urbana-Champaign, IL, 2007.
[3] P. Domingos and D. Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan & Claypool, San Rafael, CA, 2009.
[4] S. Ermon, C. P. Gomes, A. Sabharwal, and B. Selman. Accelerated Adaptive Markov Chain for Partition Function Computation. In NIPS, pages 2744–2752, 2011.
[5] A. Gelman and D. B. Rubin. Inference from iterative simulation using multiple sequences. Statistical Science, 7(4):457–472, 1992.
[6] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984.
[7] L. Getoor and B. Taskar, editors. Introduction to Statistical Relational Learning. MIT Press, 2007.
[8] V. Gogate and P. Domingos. Probabilistic theorem proving. In UAI, pages 256–265, 2011.
[9] V. Gogate, A. Jha, and D. Venugopal. Advances in Lifted Importance Sampling. In AAAI, pages 1910–1916, 2012.
[10] C. S. Jensen, U. Kjaerulff, and A. Kong. Blocking Gibbs sampling in very large probabilistic expert systems. International Journal of Human-Computer Studies, Special Issue on Real-World Applications of Uncertain Reasoning, 42:647–666, 1993.
[11] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted inference from the other side: The tractable features. In NIPS, pages 973–981, 2010.
[12] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, and P. Domingos. The Alchemy system for statistical relational AI. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 2006. http://alchemy.cs.washington.edu.
[13] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[14] P. Liang, M. I. Jordan, and D. Klein. Type-based MCMC. In HLT-NAACL, pages 573–581, 2010.
[15] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer Publishing Company, Incorporated, 2001.
[16] J. S. Liu, W. H. Wong, and A. Kong. Covariance structure of the Gibbs sampler with applications to the comparison of estimators and augmentation schemes. Biometrika, 81:27–40, 1994.
[17] D. Lowd and P. Domingos. Recursive random fields. In IJCAI, pages 950–955, 2007.
[18] B. Milch and S. J. Russell. General-purpose MCMC inference over relational structures. In UAI, pages 349–358, 2006.
[19] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted probabilistic inference with counting formulas. In AAAI, pages 1062–1068, 2008.
[20] K. P. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In UAI, pages 467–475, 1999.
[21] M. Niepert. Markov chains on orbits of permutation groups. In UAI, pages 624–633, 2012.
[22] R. M. Neal. Slice sampling. Annals of Statistics, 31(3):705–767, 2003.
[23] K. S. Ng, J. W. Lloyd, and W. T. Uther. Probabilistic modelling, inference and learning using logical theories. Annals of Mathematics and Artificial Intelligence, 54(1-3):159–205, 2008.
[24] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 1988.
[25] D. Poole. First-order probabilistic inference. In IJCAI, pages 985–991, 2003.
[26] H. Poon and P. Domingos. Sound and efficient inference with probabilistic and deterministic dependencies. In AAAI, pages 458–463, 2006.
[27] H. Poon, P. Domingos, and M. Sumner. A general method for reducing the complexity of relational inference and its application to MCMC. In AAAI, pages 1075–1080, 2008.
[28] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62:107–136, 2006.
[29] T. Sang, P. Beame, and H. Kautz. Solving Bayesian networks by weighted model counting. In AAAI, pages 475–482, 2005.
[30] P. Singla and P. Domingos. Lifted first-order belief propagation. In AAAI, pages 1094–1099, Chicago, IL, 2008. AAAI Press.
3,880 | 4,512 | Affine Independent Variational Inference
Edward Challis
David Barber
Department of Computer Science
University College London, UK
{edward.challis,david.barber}@cs.ucl.ac.uk
Abstract
We consider inference in a broad class of non-conjugate probabilistic models
based on minimising the Kullback-Leibler divergence between the given target
density and an approximating ?variational? density. In particular, for generalised
linear models we describe approximating densities formed from an affine transformation of independently distributed latent variables, this class including many
well known densities as special cases. We show how all relevant quantities can
be efficiently computed using the fast Fourier transform. This extends the known
class of tractable variational approximations and enables the fitting for example of
skew variational densities to the target density.
1 Introduction
Whilst Bayesian methods have played a significant role in machine learning and related areas (see
[1] for an introduction), improving the class of distributions for which inference is either tractable
or can be well approximated remains an ongoing challenge. Within this broad field of research,
variational methods have played a key role by enabling mathematical guarantees on inferences (see
[28] for an overview). Our contribution is to extend the class of approximating distributions beyond
classical forms to approximations that can possess skewness or other non-Gaussian characteristics,
while maintaining computational efficiency.
We consider approximating the normalisation constant Z of a probability density function p(w)

p(w) = (1/Z) ∏_{n=1}^N f_n(w)  with  Z = ∫ ∏_{n=1}^N f_n(w) dw    (1.1)

where w ∈ R^D and f_n : R^D → R_+ are potential functions. Apart from special cases, evaluating Z and other marginal quantities of p(w) is difficult due to the assumed high dimensionality D of the integral. To address this we may find an approximating density q(w) to the target p(w) by minimising the Kullback-Leibler (KL) divergence

KL(q(w)|p(w)) = ⟨log q(w)⟩_{q(w)} − ⟨log p(w)⟩_{q(w)} = −H[q(w)] − ⟨log p(w)⟩_{q(w)}    (1.2)

where ⟨f(x)⟩_{p(x)} refers to taking the expectation of f(x) with respect to the distribution p(x) and H[q(w)] is the differential entropy of the distribution q(w). The non-negativity of the KL divergence provides the lower bound

log Z ≥ H[q(w)] + ∑_{n=1}^N ⟨log f_n(w)⟩ := B.    (1.3)

Finding the best parameters θ of the approximating density q(w|θ) is then equivalent to maximising the lower bound on log Z. This KL bounding method is constrained by the class of distributions
[Figure 1: panels (a)–(c); see the caption below.]
Figure 1: Two dimensional Bayesian sparse linear regression given a single data pair x, y using a Laplace prior f_d(w) ∝ (1/2τ) e^{−|w_d|/τ} with τ = 0.16 and Gaussian likelihood N(y|w^T x, σ_l^2), σ_l^2 = 0.05. (a) True posterior with log Z = −1.4026. (b) Optimal Gaussian approximation with bound value B_G = −1.4399. (c) Optimal AI generalised-normal approximation with bound value B_AI = −1.4026.
p(w) and q(w) for which (1.3) can be efficiently evaluated. We therefore specialise to models of the form

p(w) ∝ N(w|μ, Σ) ∏_{n=1}^N f_n(w^T x_n)    (1.4)
where {x_n}_{n=1}^N is a collection of fixed D-dimensional real vectors and f_n : R → R_+; N(w|μ, Σ) denotes a multivariate Gaussian in w with mean μ and covariance Σ. This class includes Bayesian generalised linear models, latent linear models, independent components analysis and sparse linear models amongst others^1. Many problems have posteriors that possess non-Gaussian characteristics resulting from strongly non-Gaussian priors or likelihood terms. For example, in financial risk modelling it is crucial that skewness and heavy-tailed properties of the data are accurately captured [27]; similarly in inverse modelling, sparsity-inducing priors can lead to highly non-Gaussian posteriors. It is therefore important to extend the class of tractable approximating distributions beyond standard forms such as the multivariate Gaussian [20, 12, 2, 13]. Whilst mixtures of Gaussians [4, 10, 5] have previously been developed, these typically require additional bounds. Our interest here is to consider alternative multivariate distribution classes for which the KL method is more directly applicable^2.
2 Affine independent densities
We first consider independently distributed latent variables v ∼ q_v(v|θ) = ∏_{d=1}^D q_{v_d}(v_d|θ_d) with 'base' distributions q_{v_d}. To enrich the representation, we form the affine transformation w = Av + b where A ∈ R^{D×D} is invertible and b ∈ R^D. The distribution on w is then^3

q_w(w|A, b, θ) = ∫ δ(w − Av − b) q_v(v|θ) dv = (1/|det(A)|) ∏_d q_{v_d}([A^{−1}(w − b)]_d | θ_d)    (2.1)

where δ(x) = ∏_i δ(x_i) is the Dirac delta function, θ = [θ_1, ..., θ_D] and [x]_d refers to the d-th element of the vector x. Typically we assume the base distributions are homogeneous, q_{v_d} ≡ q_v. For instance, if we constrain each factor q_{v_d}(v_d|θ_d) to be the standard normal N(v_d|0, 1) then q_w(w) = N(w|b, AA^T). By using, for example, Student's t, Laplace, logistic, generalised-normal or skew-normal base distributions, equation (2.1) parameterises multivariate extensions of these univariate distributions. This class of multivariate distributions has the important property that, unlike the
^1 For p(w) in this model class and Gaussian q(w) = N(w|m, C^T C), B is tighter than 'local' bounds [14, 11, 22, 18, 16]. For log-concave f, B is jointly concave in (m, C) for C the Cholesky matrix [7].
^2 The skew-normal q(w) recently discussed in [21] possesses skew in one direction of parameter space only and is a special case of the AI skew-normal densities used in section 4.2.
^3 This construction is equivalent to a form of square noiseless Independent Components Analysis. See [9] and [25] for similar constructions.
Gaussian, they can approximate skew and/or heavy-tailed p(w). See figures 1, 2 and 3 for examples of two dimensional distributions q_w(w|A, b, θ) with skew-normal and generalised-normal base distributions used to approximate toy machine learning problems.
Provided we choose a base distribution class that includes the Gaussian as a special case (for example generalised-normal, skew-normal and asymptotically Student's t) we are guaranteed to perform at least as well as classical multivariate Gaussian KL approximate inference.
We note that we may arbitrarily permute the indices of v. Furthermore, since every invertible matrix
is expressible as LUP for L lower, U upper and P permutation matrices, without loss of generality,
we may use an LU decomposition A = LU; this reduces the complexity of subsequent computations.
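To make the construction concrete, the following minimal sketch (our own illustration, not the authors' code) draws samples from an AI density with Laplace base distributions and evaluates the log-density via the change-of-variables formula (2.1); the triangular parameterisation A = LU mirrors the decomposition above.

import numpy as np

rng = np.random.default_rng(0)
D = 3

# A = LU with L unit lower triangular and U upper triangular, as in the text.
L = np.eye(D) + np.tril(rng.normal(scale=0.3, size=(D, D)), k=-1)
U = np.triu(rng.normal(size=(D, D))) + np.eye(D)
A, b = L @ U, rng.normal(size=D)

# Independent Laplace base variables v, then the affine map w = Av + b.
v = rng.laplace(size=(10000, D))
w = v @ A.T + b

def log_qw(w_row):
    """Log-density under q_w via the change-of-variables formula (2.1)."""
    v_row = np.linalg.solve(A, w_row - b)            # A^{-1}(w - b)
    log_base = (-np.abs(v_row) - np.log(2.0)).sum()  # standard Laplace log-density
    return log_base - np.log(abs(np.linalg.det(A)))

print(log_qw(w[0]))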
Whilst defining such Affine Independent (AI) distributions is straightforward, critically we require
that the bound, equation (1.3), is fast to compute. As we explain below, this can be achieved using
the Fourier transform both for the bound and its gradients. Full derivations, including formulae for
skew-normal and generalised-normal base distributions, are given in the supplementary material.
2.1 Evaluating the KL bound
The KL bound can be readily decomposed as

B = log|det(A)| + ∑_{d=1}^D H[q(v_d|θ_d)] + ⟨log N(w|μ, Σ)⟩ + ∑_{n=1}^N ⟨log f_n(w^T x_n)⟩    (2.2)

where the first two terms constitute the entropy and the remaining two the energy, and where we used H[q_w(w)] = log|det(A)| + ∑_d H[q_{v_d}(v_d|θ_d)] (see for example [8]). For many standard base distributions the entropy H[q_{v_d}(v_d|θ_d)] is closed form. When the entropy of a univariate base distribution is not analytically available, we assume it can be cheaply evaluated numerically. The energy contribution to the KL bound is the sum of the expectation of the log Gaussian term (which requires only first and second order moments) and the nonlinear 'site projections'. The non-linear site projections (and their gradients) can be evaluated using the methods described below.
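The entropy term is cheap to evaluate since only |det(A)| and D univariate entropies are required. As a sanity check, assuming standard-normal bases (so that q_w is exactly Gaussian), the decomposition can be compared against the closed-form Gaussian entropy; this is our own illustrative check, not part of the paper's experiments.

import numpy as np

rng = np.random.default_rng(1)
D = 4
A = rng.normal(size=(D, D))              # any invertible affine map

# H[q_w] = log|det A| + sum_d H[q_{v_d}] with standard-normal bases.
h_base = 0.5 * np.log(2 * np.pi * np.e)  # entropy of N(0, 1)
h_ai = np.linalg.slogdet(A)[1] + D * h_base

# Closed-form entropy of the implied Gaussian N(b, A A^T).
cov = A @ A.T
h_gauss = 0.5 * np.linalg.slogdet(2 * np.pi * np.e * cov)[1]
print(h_ai, h_gauss)                     # the two values agree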
2.1.1 Marginal site densities
Defining y := w^T x, the expectation of the site projection for any function g and fixed vector x is equivalent to a one-dimensional expectation, ⟨g(w^T x)⟩_{q_w(w)} = ⟨g(y)⟩_{q_y(y)}, with

q_y(y) = ∫ δ(y − x^T w) q_w(w) dw = ∫ δ(y − α^T v − β) q_v(v) dv    (2.3)

where w = Av + b and α := A^T x, β := b^T x. We can rewrite this D-dimensional integral as a one-dimensional integral using the integral transform δ(x) = ∫ e^{2πitx} dt:

q_y(y) = ∫∫ e^{2πit(y − α^T v − β)} ∏_{d=1}^D q_{v_d}(v_d) dv dt = ∫ e^{2πit(y − β)} ∏_{d=1}^D q̂_{u_d}(t) dt    (2.4)

where f̂(t) denotes the Fourier transform of f(x) and q_{u_d}(u_d|θ_d) is the density of the random variable u_d := α_d v_d, so that q_{u_d}(u_d|θ_d) = (1/|α_d|) q_{v_d}(u_d/α_d | θ_d). Equation (2.4) can be interpreted as the (shifted) inverse Fourier transform of the product of the Fourier transforms of {q_{u_d}(u_d|θ_d)}.

Unfortunately, most distributions do not have Fourier transforms that admit compact analytic forms for the product ∏_{d=1}^D q̂_{u_d}(t). The notable exception is the family of stable distributions, for which linear combinations of random variables are also stable distributed (see [19] for an introduction). With the exception of the Gaussian (the only stable distribution with finite variance), Levy and Cauchy distributions, stable distributions do not have analytic forms for their density functions and are analytically expressible only in the Fourier domain. Nevertheless, when q_v(v) is stable distributed, marginal quantities of w such as y can be computed analytically in the Fourier domain [3].
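To illustrate why stable families are the exception (our own example, using the characteristic-function convention of (2.4) up to the 2π factor): for Cauchy bases each transform is exp(−|α_d||t|), so the product over d collapses back to a single Cauchy transform in closed form.

import numpy as np

alpha = np.array([0.7, -1.2, 0.4])
t = np.linspace(-5, 5, 11)

# Product of the individual Cauchy transforms versus the closed form.
prod_transform = np.prod(np.exp(-np.abs(alpha)[:, None] * np.abs(t)[None, :]), axis=0)
closed_form = np.exp(-np.abs(alpha).sum() * np.abs(t))   # Cauchy, scale = sum_d |alpha_d|
print(np.allclose(prod_transform, closed_form))          # True: y = alpha^T v is Cauchy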
[Figure 2: panels (a)–(c); see the caption below.]
Figure 2: Two dimensional Bayesian logistic regression with Gaussian prior N(w|0, 10I) and likelihood f_n(w) = σ(λ_l c_n w^T x_n), λ_l = 5. Here σ(x) is the logistic sigmoid and c_n ∈ {−1, +1} the class labels; N = 4 data points. (a) True posterior with log Z = −1.13. (b) Optimal Gaussian approximation with bound value B_G = −1.42. (c) Optimal AI skew-normal approximation with bound value B_AI = −1.17.
In general, therefore, we need to resort to numerical methods to compute q_y(y) and expectations with respect to it. To achieve this we discretise the base distributions and, by choosing a sufficiently fine discretisation, limit the maximal error that can be incurred. As such, up to a specified accuracy, the KL bound may be exactly computed.

First we define the set of discrete approximations to {q_{u_d}(u_d|θ_d)}_{d=1}^D for u_d := α_d v_d. These 'lattice' approximations are a weighted sum of K delta functions

q_{u_d}(u_d|θ_d) ≈ q̃_{u_d}(u_d) := ∑_{k=1}^K γ_{dk} δ(u_d − l_k)  where  γ_{dk} = ∫_{l_k − Δ/2}^{l_k + Δ/2} q(u_d|θ_d) du_d.    (2.5)

The lattice points {l_k}_{k=1}^K are spaced uniformly over the domain [l_1, l_K] with Δ := l_{k+1} − l_k. The weighting for each delta spike is the mass assigned to the distribution q_{u_d}(u_d|θ_d) over the interval [l_k − Δ/2, l_k + Δ/2].

Given the lattice approximations to the densities {q_{u_d}(u_d|θ_d)}_{d=1}^D, the fast Fourier transform (FFT) can be used to evaluate the convolution of the lattice distributions. Doing so we obtain the lattice approximation to the marginal y = w^T x such that (see supplementary section 2.2)

q_y(y) ≈ q̃_y(y) = ∑_k δ(y − l_k − β) ρ_k  where  ρ = ifft[ ∏_{d=1}^D fft[γ'_d] ]    (2.6)

where γ'_d is γ_d padded with (D − 1)K zeros, γ'_d := [γ_d, 0]. The only approximation used in finding the marginal density is then the discretisation of the base distributions, with the remaining FFT calculations being exact. The time complexity for the above procedure scales as O(D^2 K log(KD)).
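A toy version of the procedure in (2.5)–(2.6), assuming standard-normal base distributions so that the bin masses are easy to write down; the lattice sizes and all names are our own illustrative choices rather than the authors' implementation.

import numpy as np

# Toy setup: y = alpha^T v + beta, so u_d = alpha_d v_d is N(0, alpha_d^2).
alpha = np.array([0.7, -1.2, 0.4])
beta = 0.5
D, K = len(alpha), 512
lattice = np.linspace(-8.0, 8.0, K)
delta = lattice[1] - lattice[0]

def bin_masses(scale):
    density = np.exp(-0.5 * (lattice / scale) ** 2) / (abs(scale) * np.sqrt(2 * np.pi))
    return density * delta                      # gamma_{dk} of equation (2.5)

gamma = np.stack([bin_masses(a) for a in alpha])

# Zero-pad each row to length D*K, multiply the FFTs and invert: equation (2.6).
n = D * K
rho = np.fft.ifft(np.prod(np.fft.fft(gamma, n=n, axis=1), axis=0)).real

# The convolved lattice starts at D times the left end point; add beta to shift.
support = D * lattice[0] + delta * np.arange(n) + beta
print(support[np.argmax(rho)])                  # close to beta for this symmetric case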
2.1.2 Efficient site derivative computation
Whilst we have shown that the expectation of the site projections can be accurately computed using the FFT, how to cheaply evaluate the derivative of this term is less clear. The complication can be seen by inspecting the partial derivative of ⟨g(w^T x)⟩ with respect to A_mn

∂/∂A_mn ⟨g(w^T x)⟩_{q(w)} = x_n ∫ q_v(v) g'(x^T Av + b^T x) v_m dv,    (2.7)

where g'(y) = (d/dy) g(y). Naively, this can be readily reduced to a (relatively expensive) two dimensional integral. Critically, however, the computation can be simplified to a one dimensional integral. To see this we can write

∂/∂A_mn ⟨g(w^T x)⟩ = x_n ∫ g'(y) d_m(y) dy,  where  d_m(y) := ∫ v_m q_v(v) δ(y − α^T v − β) dv.
[Figure 3: panels (a)–(c); see the caption below.]
Figure 3: Two dimensional robust linear regression with Gaussian prior N(w|0, I), Laplace likelihood f_n(w) = (1/2σ_l) e^{−|y_n − w^T x_n|/σ_l} with σ_l = 0.1581 and 2 data pairs x_n, y_n. (a) True posterior with log Z = −3.5159. (b) Optimal Gaussian approximation with bound value B_G = −3.6102. (c) Optimal AI generalised-normal approximation with bound value B_AI = −3.5167.
Here d_m(y) is a univariate weighting function with Fourier transform

d̂_m(t) = e^{−2πitβ} ê_m(t) ∏_{d≠m} q̂_{u_d}(t),  where  ê_m(t) := ∫ (u_m/α_m) q_{u_m}(u_m) e^{−2πitu_m} du_m.

Since {q̂_{u_d}(t)}_{d=1}^D are required to compute the expectation of g(w^T x), the only additional computations needed to evaluate all partial derivatives with respect to A are {ê_d(t)}_{d=1}^D. Thus the complexity of computing the site derivative^4 is equivalent to the complexity of the site expectation of section 2.1.1.
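In code, the extra transforms ê_m(t) are ordinary FFTs of reweighted bin masses, so the derivative weighting functions d_m(y) cost essentially the same as the marginal; the sketch below reuses the toy standard-normal lattice assumptions of the earlier sketch and is our own illustration, not the released implementation.

import numpy as np

alpha, K = np.array([0.7, -1.2, 0.4]), 512
D = len(alpha)
lattice = np.linspace(-8.0, 8.0, K)
delta = lattice[1] - lattice[0]
gamma = np.stack([np.exp(-0.5 * (lattice / a) ** 2) * delta / (abs(a) * np.sqrt(2 * np.pi))
                  for a in alpha])
n = D * K

# q_hat is already needed for the marginal; e_hat is the only extra set of transforms.
q_hat = np.fft.fft(gamma, n=n, axis=1)
e_hat = np.fft.fft((lattice / alpha[:, None]) * gamma, n=n, axis=1)

# d_m on the lattice: invert e_hat[m] times the product of the other q_hat rows
# (the e^{-2 pi i t beta} phase only shifts the support by beta).
def d_lattice(m):
    others = np.prod(np.delete(q_hat, m, axis=0), axis=0)
    return np.fft.ifft(e_hat[m] * others).real

print(d_lattice(0)[:3])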
Even for non-smooth functions g the site gradient has the additional property that it is smooth, provided the base distributions are smooth. Indeed, this property extends to the KL bound itself, which has continuous partial derivatives; see supplementary material section 1. This means that gradient optimisation for AI-based KL approximate inference can be applied even when the target density is non-smooth. In contrast, other deterministic approximate inference routines are not universally applicable to non-smooth target densities: for instance, the Laplace approximation and [10] both require the target to be smooth.
2.2 Optimising the KL bound
Given fixed base distributions, we can optimise the KL bound with respect to the parameters A = LU and b. Provided {f_n}_{n=1}^N are log-concave, the KL bound is jointly concave with respect to b and either L or U. This follows from an application of the concavity result in [7]; see the supplementary material section 3.

Using a similar approach to that presented in section 2.1.2 we can also efficiently evaluate the gradients of the KL bound with respect to the parameters θ that define the base distributions. These parameters θ can control higher order moments of the approximating density q(w) such as skewness and kurtosis. We can therefore jointly optimise over all parameters {A, b, θ} simultaneously; this means that we can fully capitalise on the expressiveness of the AI distribution class, allowing us to capture non-Gaussian structure in p(w).

In many modelling scenarios the best choice for q_v(v) will suggest itself naturally. For example, in section 4.1 we choose the skew-normal distribution to approximate Bayesian logistic regression posteriors. For heavy-tailed posteriors that arise for example in robust or sparse Bayesian linear regression models, one choice is to use the generalised-normal as base density, which includes the Laplace and Gaussian distributions as special cases. For other models, for instance mixed data factor analysis [15], different distributions for blocks of variables of {v_d}_{d=1}^D may be optimal. However, in situations for which it is not clear how to select q_v(v), several different distributions can be assessed and then that which achieves the greatest lower bound B is preferred.

^4 Further derivations and computational scaling properties are provided in supplementary section 2.
2.3 Numerical issues
The computational burden of the numerical marginalisation procedure described in section 2.1.1 depends on the number of lattice points used to evaluate the convolved density function q_y(y). For the results presented we implemented a simple strategy for choosing the lattice points [l_1, ..., l_K]. Lattice end points were chosen^5 such that [l_1, l_K] = [−6σ_y, 6σ_y] where σ_y is the standard deviation of the random variable y: σ_y^2 = ∑_d α_d^2 var(v_d). From Chebyshev's inequality, taking six standard deviation end points guarantees that we capture at least 97% of the mass of q_y(y). In practice this proportion is often much higher since q_y(y) is often close to Gaussian for D ≫ 1. We fix the number of lattice points used during optimisation to suit our computational budget. To compute the final bound value we apply the simple strategy of doubling the number of lattice points until the evaluated bound changes by less than 10^{-3} [6].
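A minimal version of this end-point rule, again under illustrative standard-normal assumptions of our own:

import numpy as np

alpha = np.array([0.7, -1.2, 0.4])
var_v = np.ones_like(alpha)          # var(v_d) = 1 for standard-normal bases

# Six-standard-deviation end points for the lattice of y = alpha^T v + beta.
sigma_y = np.sqrt(np.sum(alpha ** 2 * var_v))
l1, lK = -6 * sigma_y, 6 * sigma_y

# During optimisation K is fixed; for the final bound, K would be doubled
# until successive bound evaluations differ by less than 1e-3.
K = 512
lattice = np.linspace(l1, lK, K)
print(sigma_y, lattice[0], lattice[-1])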
Fully characterising the overall accuracy of the approximation as a function of the number of lattice points is complex; see [24, 26] for a related discussion. One determining factor is the condition number (ratio of largest and smallest eigenvalues) of the posterior covariance. When the condition number is large, many lattice points are needed to accurately discretise the set of distributions {q_{u_d}(u_d|θ_d)}_{d=1}^D, which increases the time and memory requirements.

One possible route to circumventing these issues is to use base densities that have analytic Fourier transforms (such as a mixture of Gaussians). In such cases the discrete Fourier transform of q_y(y) can be directly evaluated by computing the product of the Fourier transforms of each {q_{u_d}(u_d|θ_d)}_{d=1}^D. The implementation and analysis of this procedure is left for future work.
The computational bottleneck for AI inference, assuming N > D, arises from computing the expectation and partial derivatives of the N site projections. For parameters w ∈ R^D this scales as O(N D^2 K log(DK)). Whilst this might appear expensive, it is worth considering it within the broader scope of lower bound inference methods. It was shown in [7] that exact Gaussian KL approximate inference has bound and gradient computations which scale as O(N D^2). Similarly, local variational bounding methods (see below) scale as O(N D^2) when implemented exactly.
3 Related methods
Another commonly applied technique to obtain a lower bound for densities of the form of equation
(1.4) is the 'local' variational bounding procedure [14, 11, 22, 18]. Local bounding methods approximate the normalisation constant by bounding each non-conjugate term in the integrand, equation
(1.1), with a form that renders the integral tractable. In [7] we showed that the Gaussian KL bound
dominates the local bound in such models. Hence the AI KL method also dominates the local and
Gaussian KL methods.
Other approaches increase the flexibility of the approximating distribution by expressing qw (w) as a
mixture. However, computing the entropy of a mixture distribution is in general difficult. Whilst one
may bound the entropy term [10, 4], employing such additional bounds is undesirable since it limits
the gains from using a mixture. Another recently proposed method to approximate integrals using
mixtures is split mean field which iterates between constructing soft partitions of the integral domain
and bounding those partitioned integrals [5]. The partitioned integrals are approximated using local
or Gaussian KL bounds. Our AI method is complementary to the split mean field method since one
may use the AI technique to bound each of the partitioned integrals and so achieve an improved
bound.
4 Experiments
For the experiments below^6, AI KL bound optimisation is performed using L-BFGS^7. Gaussian KL inference is implemented in all experiments using our own package^8.
^5 For symmetric densities {q_{u_d}(u_d|θ_d)} we arranged that their mode coincides with the central lattice point.
^6 All experiments are performed in Matlab 2009b on a 32-bit Intel Core 2 Quad 2.5 GHz processor.
^7 We used the minFunc package (www.di.ens.fr/~mschmidt).
^8 mloss.org/software/view/308/
[Figure 4: panels (a) and (b), plotting B_AI − B_G and ATLP_AI − ATLP_G against N_trn; see the caption below.]
Figure 4: Gaussian KL and AI KL approximate inference comparison for a Bayesian logistic regression model with different training data set sizes N_trn. w ∈ R^{10}; Gaussian prior N(w|0, 5I); logistic sigmoid likelihood f_n = σ(λ_l c_n w^T x_n) with λ_l = 5; covariates x_n sampled from the standard normal, w_true sampled from the prior and class labels c_n = ±1 sampled from the likelihood. (a) Bound differences, B_AI − B_G, achieved using Gaussian KL and AI KL approximate inference for different training dataset sizes N_trn. Mean and standard errors are presented from 15 randomly generated models. A logarithmic difference of 0.4 corresponds to a 49% improvement in the bound on the marginal likelihood. (b) Mean and standard error averaged test set log probability (ATLP) differences obtained with the Gaussian and AI approximate posteriors for different training dataset sizes N_trn. ATLP values calculated using 10^4 test data points sampled from each model.
4.1 Toy problems
We compare the performance of Gaussian KL and AI KL approximate inference methods in three
different two dimensional generalised linear models against the true posteriors and marginal likelihood values obtained numerically. See supplementary section 4 for derivations. Figure 1 presents
results for a linear regression model with a sparse Laplace prior; the AI base density is chosen to be
generalised-normal. Figure 2 demonstrates approximating a Bayesian logistic regression posterior,
with the AI base distribution skew-normal. Figure 3 corresponds to a Bayesian linear regression
model with the noise robust Laplace likelihood density and Gaussian prior; again the AI approximation uses the generalised-normal as the base distribution. The AI KL procedure achieves a consistently higher bound than the Gaussian case, with the AI bound nearly saturating at the true value
of log Z in two of the models. In addition, the AI approximation captures significant non-Gaussian
features of the posterior: the approximate densities are sparse in directions of sparsity of the posterior; their modes are approximately equal (where the Gaussian mode can differ significantly); tail
behaviour is more accurately captured by the AI distribution than by the Gaussian.
4.2 Bayesian logistic regression
We compare Gaussian KL and AI KL approximate inference for a synthetic Bayesian logistic regression model. The AI density has a skew-normal base distribution with θ_d parameterising the skewness of v_d. We optimised the AI KL bound jointly with respect to L, U, b and θ simultaneously, with convergence taking on average 8 seconds with D = N = 10, compared to 0.2 seconds for Gaussian KL^9. In figure 4 we plot the performance of the KL bound for the Gaussian versus the skew-normal AI approximation as we vary the number of datapoints. In (a) we plot the mean and standard error bound differences B_AI − B_G. For a small number of datapoints the bound difference is small. This difference increases up to D = N, and then decreases for larger datasets. This behaviour can be explained by the fact that when there are few datapoints the Gaussian prior dominates, with little difference therefore between the Gaussian and optimal AI approximation (which becomes effectively Gaussian). As more data is introduced, the non-Gaussian likelihood terms have a stronger impact and the posterior becomes significantly non-Gaussian. However, as even more data is introduced, the central limit theorem effect takes hold and the posterior becomes increasingly Gaussian. In (b) we plot the mean and standard error differences for the averaged test set log probabilities (ATLP) calculated using the Gaussian and AI approximate posteriors obtained in each model. For each model and each training set size the ATLP is calculated using 10^4 test points sampled from the model. The log test set probability of each test data pair x*, c* is calculated as log⟨p(c*|w, x*)⟩_{q(w)} for q(w) the approximate posterior. The bound differences can be seen to be strongly correlated with test set log probability differences, confirming that tighter bound values correspond to improved predictive performance.

^9 We note that split mean field approximate inference was reported to take approximately 100 seconds for a similar logistic regression model achieving comparable results [20].
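This test metric is straightforward to estimate by sampling from the approximate posterior; the following Monte Carlo sketch is our own illustration (using a Gaussian q and a unit-gain logistic likelihood for brevity):

import numpy as np

rng = np.random.default_rng(0)
D, S = 10, 5000

# Assume an approximate posterior q(w); here a Gaussian with mean m, covariance V.
m, V = rng.normal(size=D), np.eye(D) * 0.1
w_samples = rng.multivariate_normal(m, V, size=S)

def log_test_prob(x_star, c_star):
    """log <p(c*|w, x*)>_{q(w)} estimated by Monte Carlo over posterior samples."""
    logits = c_star * (w_samples @ x_star)
    probs = 1.0 / (1.0 + np.exp(-logits))   # logistic likelihood per sample
    return np.log(probs.mean())

x_star, c_star = rng.normal(size=D), 1
print(log_test_prob(x_star, c_star))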
4.3 Sparse robust kernel regression
In this experiment we consider sparse, noise robust kernel regression. Sparsity is encoded using a Laplace prior on the weight vectors, ∏_d f_d(w_d) where f_d(w_d) = e^{−|w_d|/τ_p}/(2τ_p). The Laplace distribution is also used as a noise robust likelihood f_n(w) = p(y_n|w, k_n) = e^{−|y_n − w^T k_n|/τ_l}/(2τ_l), where k_n is the n-th vector of the kernel matrix. The squared exponential kernel was used throughout with length scale 0.05 and additive noise 1; see [23]. In all experiments the prior and likelihood were fixed with τ_p = τ_l = 0.16. Three datasets are considered: Boston housing^10 (D = 14, N_trn = 100, N_tst = 406); Concrete Slump Test^11 (D = 10, N_trn = 100, N_tst = 930); a synthetic dataset constructed as described in [17], §5.6.1 (D = 10, N_trn = 100, N_tst = 406). Results are collected for each data set over 10 random training and test set partitions. All datasets are zero mean, unit variance normalised based on the statistics of the training data.
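For reference, a kernel matrix of this form can be built as below; whether the additive noise enters on the diagonal exactly as written is our assumption about the convention of [23], so treat this as a sketch with illustrative data rather than the paper's preprocessing code.

import numpy as np

def se_kernel(X1, X2, length_scale=0.05, noise=1.0):
    """Squared exponential kernel with additive noise on the diagonal."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * sq / length_scale ** 2)
    if X1 is X2:
        K = K + noise * np.eye(len(X1))
    return K

X = np.random.default_rng(0).normal(size=(100, 13))   # toy covariate matrix
K = se_kernel(X, X)                                   # k_n is the n-th row of K
print(K.shape)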
Dataset    | B̄_G          | B̄_AI         | B̄_AI − B̄_G    | ATLP_G       | ATLP_AI      | ATLP_AI − ATLP_G
Conc. CS.  | -2.08 ± 0.09 | -2.06 ± 0.09 | 0.022 ± 0.004 | -1.70 ± 0.11 | -1.67 ± 0.11 | 0.024 ± 0.010
Boston     | -1.28 ± 0.05 | -1.25 ± 0.05 | 0.028 ± 0.003 | -1.18 ± 0.10 | -1.15 ± 0.09 | 0.023 ± 0.006
Synthetic  | -2.49 ± 0.10 | -2.46 ± 0.10 | 0.028 ± 0.004 | -1.84 ± 0.11 | -1.83 ± 0.11 | 0.009 ± 0.009
AI KL inference is performed with a generalised-normal base distribution. The parameters θ_d control the kurtosis of the base distributions q(v_d|θ_d); for simplicity we fix θ_d = 1.5 and optimise jointly for L, U, b. Bound optimisation took roughly 250 seconds for the AI KL procedure, compared to 5 seconds for the Gaussian KL procedure. Averaged results and standard errors are presented in the table above, where B̄ denotes the bound value divided by the number of points in the training dataset. Whilst the improvements for these particular datasets are modest, the AI bound dominates the Gaussian bound in all three datasets, with predictive log probabilities also showing consistent improvement.
Whilst we have only presented experimental results for AI distributions with simple analytically expressible base distributions, we note the method is applicable for any base distribution provided {q_{v_d}(v_d)}_{d=1}^D are smooth. For example, smooth univariate mixtures for q_{v_d}(v_d) can be used.
5 Discussion
Affine independent KL approximate inference has several desirable properties compared to existing deterministic bounding methods. We've shown how it generalises classical multivariate Gaussian KL approximations, and our experiments confirm that the method is able to capture non-Gaussian effects in posteriors. Since we optimise the KL divergence over a larger class of approximating densities than the multivariate Gaussian, the lower bound to the normalisation constant is also improved. This is particularly useful for model selection purposes, where the normalisation constant plays the role of the model likelihood.

There are several interesting areas open for further research. The numerical procedures presented in section 2.1 provide a general and computationally efficient means for inference in non-Gaussian densities whose application could be useful for a range of probabilistic models. However, our current understanding of the best approach to discretise the base densities is limited and further study of this is required, particularly for application in very large systems D ≫ 1. It would also be useful to investigate using base densities that directly allow for efficient computation of the marginals q_y(y) in the Fourier domain.
^10 archive.ics.uci.edu/ml/datasets/Housing
^11 archive.ics.uci.edu/ml/datasets/Concrete+Slump+Test
References
[1] D. Barber. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2012.
[2] D. Barber and C. M. Bishop. Ensemble Learning for Multi-Layer Networks. In Advances in Neural Information Processing Systems, NIPS 10, 1998.
[3] D. Bickson and C. Guestrin. Inference with Multivariate Heavy-Tails in Linear Models. In Advances in Neural Information Processing Systems, NIPS 23, 2010.
[4] C. M. Bishop, N. Lawrence, T. Jaakkola, and M. I. Jordan. Approximating Posterior Distributions in Belief Networks Using Mixtures. In Advances in Neural Information Processing Systems, NIPS 10, 1998.
[5] G. Bouchard and O. Zoeter. Split Variational Inference. In International Conference on Artificial Intelligence and Statistics, AISTATS, 2009.
[6] R. N. Bracewell. The Fourier Transform and its Applications. McGraw-Hill Book Co, Singapore, 2000.
[7] E. Challis and D. Barber. Concave Gaussian Variational Approximations for Inference in Large-Scale Bayesian Linear Models. In International Conference on Artificial Intelligence and Statistics, AISTATS, 2011.
[8] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New York, 1991.
[9] J. T. A. S. Ferreira and M. F. J. Steel. A New Class of Skewed Multivariate Distributions with Applications to Regression Analysis. Statistica Sinica, 17:505–529, 2007.
[10] S. Gershman, M. Hoffman, and D. Blei. Nonparametric Variational Inference. In International Conference on Machine Learning, ICML 29, 2012.
[11] M. Girolami. A Variational Method for Learning Sparse and Overcomplete Representations. Neural Computation, 13:2517–2532, 2001.
[12] A. Graves. Practical Variational Inference for Neural Networks. In Advances in Neural Information Processing Systems, NIPS 24, 2011.
[13] A. Honkela and H. Valpola. Unsupervised Variational Bayesian Learning of Nonlinear Models. In Advances in Neural Information Processing Systems, NIPS 17, 2005.
[14] T. Jaakkola and M. Jordan. A Variational Approach to Bayesian Logistic Regression Problems and their Extensions. In Artificial Intelligence and Statistics, AISTATS 6, 1996.
[15] M. E. Khan, B. Marlin, G. Bouchard, and K. Murphy. Variational Bounds for Mixed-Data Factor Analysis. In Advances in Neural Information Processing Systems, NIPS 23, 2010.
[16] D. A. Knowles and T. Minka. Non-conjugate Variational Message Passing for Multinomial and Binary Regression. In Advances in Neural Information Processing Systems, NIPS 23, 2011.
[17] M. Kuss. Gaussian Process Models for Robust Regression, Classification, and Reinforcement Learning. PhD thesis, Technische Universität Darmstadt, Darmstadt, Germany, 2006.
[18] H. Nickisch and M. Seeger. Convex Variational Bayesian Inference for Large Scale Generalized Linear Models. In International Conference on Machine Learning, ICML 26, 2009.
[19] J. P. Nolan. Stable Distributions – Models for Heavy Tailed Data. Birkhauser, Boston, 2012. In progress, Chapter 1 online at academic2.american.edu/~jpnolan.
[20] M. Opper and C. Archambeau. The Variational Gaussian Approximation Revisited. Neural Computation, 21(3):786–792, 2009.
[21] J. Ormerod. Skew-Normal Variational Approximations for Bayesian Inference. Technical Report CRGTR-93-1, School of Mathematics and Statistics, University of Sydney, 2011.
[22] A. Palmer, D. Wipf, K. Kreutz-Delgado, and B. Rao. Variational EM Algorithms for Non-Gaussian Latent Variable Models. In Advances in Neural Information Processing Systems, NIPS 20, 2006.
[23] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[24] P. Ruckdeschel and M. Kohl. General Purpose Convolution Algorithm in S4-Classes by means of FFT. Technical Report 1006.0764v2, arXiv.org, 2010.
[25] S. K. Sahu, D. K. Dey, and M. D. Branco. A New Class of Multivariate Skew Distributions with Applications to Bayesian Regression Models. The Canadian Journal of Statistics / La Revue Canadienne de Statistique, 31(2):129–150, 2003.
[26] P. Schaller and G. Temnov. Efficient and precise computation of convolutions: applying FFT to heavy tailed distributions. Computational Methods in Applied Mathematics, 8(2):187–200, 2008.
[27] S. Chib, F. Nardari, and N. Shephard. Markov chain Monte Carlo methods for stochastic volatility models. Journal of Econometrics, 108(2):281–316, 2002.
[28] M. J. Wainwright and M. I. Jordan. Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
Predicting Action Content On-Line and in Real Time before Action Onset – an Intracranial Human Study
Shengxuan Ye
California Institute of Technology
Pasadena, CA
[email protected]
Uri Maoz
California Institute of Technology
Pasadena, CA
[email protected]
Ian Ross
Huntington Hospital
Pasadena, CA
[email protected]
Adam Mamelak
Cedars-Sinai Medical Center
Los Angeles, CA
[email protected]
Christof Koch
California Institute of Technology
Pasadena, CA
Allen Institute for Brain Science
Seattle, WA
[email protected]
Abstract
The ability to predict action content from neural signals in real time before the action occurs has been long sought in the neuroscientific study of decision-making, agency and volition. On-line real-time (ORT) prediction is important for understanding the relation between neural correlates of decision-making and conscious, voluntary action as well as for brain-machine interfaces. Here, epilepsy patients, implanted with intracranial depth microelectrodes or subdural grid electrodes for clinical purposes, participated in a 'matching-pennies' game against an opponent. In each trial, subjects were given a 5 s countdown, after which they had to raise their left or right hand immediately as the 'go' signal appeared on a computer screen. They won a fixed amount of money if they raised a different hand than their opponent and lost that amount otherwise. The question we here studied was the extent to which neural precursors of the subjects' decisions can be detected in intracranial local field potentials (LFP) prior to the onset of the action.

We found that combined low-frequency (0.1–5 Hz) LFP signals from 10 electrodes were predictive of the intended left-/right-hand movements before the onset of the go signal. Our ORT system predicted which hand the patient would raise 0.5 s before the go signal with 68 ± 3% accuracy in two patients. Based on these results, we constructed an ORT system that tracked up to 30 electrodes simultaneously, and tested it on retrospective data from 7 patients. On average, we could predict the correct hand choice in 83% of the trials, which rose to 92% if we let the system drop 3/10 of the trials on which it was less confident. Our system demonstrates, for the first time, the feasibility of accurately predicting a binary action on single trials in real time for patients with intracranial recordings, well before the action occurs.
1 Introduction
The work of Benjamin Libet [1, 2] and others [3, 4] has challenged our intuitive notions of the relation between decision making and conscious voluntary action. Using electroencephalography (EEG), these experiments measured brain potentials from subjects that were instructed to flex their wrist at a time of their choice and note the position of a rotating dot on a clock when they felt the urge to move. The results suggested that a slow cortical wave measured over motor areas, termed 'readiness potential' [5] and known to precede voluntary movement [6], begins a few hundred milliseconds before the average reported time of the subjective 'urge' to move. This suggested that action onset and contents could be decoded from preparatory motor signals in the brain before the subject becomes aware of an intention to move and of the contents of the action. However, the readiness potential was computed by averaging over 40 or more trials aligned to movement onset after the fact. More recently, it was shown that action contents can be decoded using functional magnetic-resonance imaging (fMRI) several seconds before movement onset [7]. But, while done on a single-trial basis, decoding the neural signals took place off-line, after the experiment was concluded, as the sluggish nature of fMRI hemodynamic signals precluded real-time analysis. Moreover, the above studies focused on arbitrary and meaningless action, purposelessly raising the left or right hand, while we wanted to investigate prediction of reasoned action in more realistic, everyday situations with consequences for the subject.

Intracranial recordings are good candidates for single-trial, ORT analysis of action onset and contents [8, 9], because of the tight temporal pairing of LFP to the underlying neuronal signals. Moreover, such recordings are known to be cleaner and more robust, with signal-to-noise ratios up to 100 times larger than surface recordings like EEG [10, 11]. We therefore took advantage of a rare opportunity to work with epilepsy patients implanted with intracranial electrodes for clinical purposes. Our ORT system (Fig. 1) predicts, with far above chance accuracy, which one of two future actions is about to occur on a given trial, and feeds the prediction back to the experimenter, all before the onset of the go signal that triggers the patient's movement (see Experimental Methods). We achieve relatively high prediction performance using only part of the data, learning from brain activity in past trials only (Fig. 2) to predict future ones (Fig. 3), while still running the analysis quickly enough to act upon the prediction before the subject moved.
2 Experimental Methods
2.1 Subjects
Subjects in this experiment were 8 consenting intractable epilepsy patients that were implanted with intracranial electrodes as part of their presurgical clinical evaluation (ages 18-60, 3 males). They were inpatients in the neuro-telemetry ward at the Cedars Sinai Medical Center or the Huntington Memorial Hospital, and are designated with CS or HMH after their patient numbers, respectively. Six of them (P12CS, P15CS, P22CS and P29-31HMH) were implanted with intracortical depth electrodes targeting their bilateral anterior-cingulate cortex, amygdala, hippocampus and orbitofrontal cortex. These electrodes had eight 40 μm microwires at their tips, 7 for recording and 1 serving as a local ground. Two patients, P15CS and P22CS, had additional microwires in the supplementary motor area. We utilized the LFP recorded from the microwires in this study. Two other patients, P16CS and P19CS, were implanted with an 8×8 subdural grid (64 electrodes) over parts of their temporal and prefrontal dorsolateral cortices. The data of one patient (P31HMH) was excluded because microwire signals were too noisy for meaningful analysis. The institutional review boards of Cedars Sinai Medical Center, the Huntington Memorial Hospital and the California Institute of Technology approved the experiments.
During the experiment, the subject sat in a hospital bed in a semi-inclined "lounge chair" position.
The stimulus/analysis computer (bottom left of Fig. 4) displaying the game screen (bottom right
inset of Fig. 4) was positioned to be easily viewable for the subject. When playing against the
experimenter, the latter sat beside the bed. The response box was placed within easy reach of the
subject (Fig. 4).
2.2 Experiment Design
As part of our focus on purposeful, reasoned action, we had the subjects play a matching-pennies game (a 2-choice version of "rock paper scissors") either against the experimenter or against a computer. The subjects pressed down a button with their left hand and another with their right on a response box. Then, in each trial, there was a 5 s countdown followed by a go signal, after which they had to immediately lift one of their hands. It was agreed beforehand that the patient would win the trial if she lifted a different hand than her opponent, and lose if she raised the same hand as her opponent. Both players started off with a fixed amount of money, $5, and in each trial $0.10 was deducted from the loser and awarded to the winner. If a player lifted her hand before the go signal, did not lift her hand within 500 ms of the go signal, or lifted no hand or both hands at the go signal (an error trial), she lost $0.10 without her opponent gaining any money. The subjects were shown the countdown, the go signal, the overall score, and various instructions on a stimulus computer placed before them (Fig. 4). Each game consisted of 50 trials. If, at the end of the game, the subject had more money than her opponent, she received that money in cash from the experimenter.
Before the experimental session began, the experimenter explained the rules of the game to the subject, and she could practice playing the game until she was familiar with it. Consequently, patients usually made only a few errors during the games (<6% of the trials). Following the tutorial, the subject played 1-3 games against the computer and then once against the experimenter, depending on their availability and clinical circumstances. The first 2 games of P12CS were removed because the subject tended to constantly raise the right hand regardless of winning or losing. Two patients, P15CS and P19CS, were tested in actual ORT conditions. In such sessions (3 games each) the subjects always played against the experimenter. These ORT games were different from the other games in two respects. First, a computer screen was placed behind the patient, in a location where she could not see it. Second, the experimenter was wearing earphones (Figs. 1, 4). Half a second before go-signal onset, an arrow pointing towards the hand that the system predicted the experimenter had to raise to win the trial was displayed on that screen. Simultaneously, a monophonic tone was played in the experimenter's earphone ipsilateral to that hand. The experimenter then lifted that hand at the go signal (see Supplemental Movie).
[Figure 1 diagram: the patient with intracranial electrodes feeds neural data to the Cheetah machine (collect and save data, down-sampling, buffer), which sends it over a 1 Gbps router to the analysis/stimulus machine (filtering, analysis, result interpretation, display/sound); the analysis/stimulus machine drives the game screen ("The winner is Player 1", player scores) and the response box, TTL signals mark game events, and the experimenter receives the prediction.]
Figure 1: A schematic diagram of the on-line real-time (ORT) system. Neural signals flow from
the patient through the Cheetah machine to the analysis/stimulus computer, which controls the input
and output of the game and computes the prediction of the hand the patient would raise at the go
signal. It displays it on a screen behind the patient and informs the experimenter which hand to raise
by playing a tone in his ipsilateral ear using earphones.
3 The real-time system
3.1 Hardware and software overview
Neural data from the intracranial electrodes were transferred to a recording system (Neuralynx, Digital Lynx), where it was collected and saved to the local Cheetah machine, down-sampled from 32 kHz to 2 kHz and buffered. The data were then transferred, through a dedicated 1 Gbps local-area network, to the analysis/stimulus machine. This computer first band-pass-filtered the data to the 0.1-5 Hz range (delta and lower theta bands) using a second-order zero-lag elliptic filter with an attenuation of 40 dB (cf. Figs. 2a and 2b). We found that this frequency range, generally comparable to that of the readiness potential, resulted in optimal prediction performance. It then ran the analysis algorithm (see below) on the filtered data. This computer also controlled the game screen, displaying the names of the players, their current scores and various instructions.
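To make the filtering step concrete, the following is a minimal Python sketch of such a zero-phase band-pass stage. The 0.1-5 Hz band, second order, 40 dB stopband attenuation and the 2 kHz rate come from the text; the passband-ripple value and all names are our assumptions, not the authors' code.

import numpy as np
from scipy.signal import ellip, filtfilt

FS = 2000.0    # sampling rate after down-sampling (Hz)

def bandpass_lfp(x, lo=0.1, hi=5.0, order=2, atten_db=40.0, ripple_db=0.5):
    """Zero-lag elliptic band-pass of one LFP channel (1-D array sampled at FS)."""
    b, a = ellip(order, ripple_db, atten_db, [lo, hi], btype='bandpass', fs=FS)
    return filtfilt(b, a, x)    # forward-backward filtering gives zero phase lag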
The analysis/stimulus computer further
controlled the response box, which consisted of 4 LED-lit buttons. The buttons of the subject and her opponent flashed red or blue whenever she or her opponent won, respectively. Additionally, the analysis/stimulus computer sent a unique transistor-transistor logic (TTL) pulse whenever the game screen changed or a button was pressed on the response box, which synchronized the timing of these events with the LFP recordings. In real-time game sessions, the analysis/stimulus computer also displayed the appropriate arrow on the computer screen behind the subject and played the tone to the appropriate ear of the experimenter 0.5 s before go-signal onset (Figs. 1, 4).
The analysis software was based on a
machine-learning algorithm that trained
on past-trials data to predict the current
trial and is detailed below. The training phase included the first 70% of the
trials, with the prediction carried out on
the remaining 30% using the trained parameters, together with an online weighting system (see below). The system examined only neural activity, and had no
access to the subject?s left/right-choice
history. After filtering all the training
trials (Fig. 2b), the system found the
mean and standard error over all leftward
and rightward training trials, separately
(Fig. 2c, left designated in red). It then
found the electrodes and time windows
where the left/right separation was high
(Fig. 2d,e; see below), and trained the classifiers on these time windows (Fig. 2f-g).
The best electrode/time-window/classifier
(ETC) combinations were then used to
predict the current trial in the prediction
phase (Fig. 3). The number of ETCs that
can be actively monitored is currently limited to 10 due to the computational power
of the real-time system.
[Figure 2 panels (e)-(g): time windows El 49-T1, El 49-T2 and El 49-T3 found on electrode 49 (countdown to go signal at t=0, in seconds); classifiers Cf1 to Cf6 applied to every time window; the selected electrode/time-window/classifier combinations, e.g. El49-T1-Cf2, El49-T2-Cf2, El49-T2-Cf6.]
Figure 2: The ORT system's training phase. Left (in red) and right (in blue) raw signals (a) are low-pass filtered (b). Mean±standard errors of signals preceding left- and right-hand movements (c) are used to compute a left/right separability index (d), from which time windows with good separation are found (e). Seven classifiers are then applied to all the time windows (f) and the best electrode/time-window/classifier combinations are selected (g) and used in the prediction phase (Fig. 3).
[Figure 3 schematic: an incoming filtered trace (5 to 0.5 s before the go signal) is passed to the trained ETC combinations (e.g. El 49-T1-Cf2, El 49-T2-Cf2, El 49-T2-Cf6, each starting with weight 1); their individual left/right votes are combined into a predicted result, which is compared to the real result, and the weights are adjusted.]
Figure 3: The ORT system's prediction phase. A new signal, from 5 to 0.5 seconds before the go signal, is received in real time, and each electrode/time-window/classifier combination (ETC) classifies it as resulting in left- or right-hand movement. These predictions are then compared to the actual hand movement, with the weights associated with ETCs that correctly (incorrectly) predicted increasing (decreasing).
3.2 Computing optimal left/right-separating time windows
The algorithm focused on finding the time windows with the best left/right separation for the different recording electrodes over the training set (Fig. 2c-e). That is, we wanted to predict whether the signal a_N(t) on trial N will result in a leftward or rightward movement, i.e., whether the label of the N-th trial will be Lt or Rt, respectively. For each electrode, we looked at the N - 1 previous trials a_1(t), a_2(t), ..., a_{N-1}(t), and their associated labels l_1, l_2, ..., l_{N-1}. Now, let L(t) = {a_i(t) | l_i = Lt, i = 1, ..., N-1} and R(t) = {a_i(t) | l_i = Rt, i = 1, ..., N-1} be the sets of previous leftward and rightward trials in the training set, respectively. Furthermore, let L_m(t) (R_m(t)) and L_s(t) (R_s(t)) be the mean and standard error of L(t) (R(t)), respectively. We can now define the normalized relative left/right separation for each electrode at time t (see Fig. 2d):

Δ(t) = ( [L_m(t) - L_s(t)] - [R_m(t) + R_s(t)] ) / ( L_m(t) - R_m(t) )    if [L_m(t) - L_s(t)] - [R_m(t) + R_s(t)] > 0
Δ(t) = ( [R_m(t) - R_s(t)] - [L_m(t) + L_s(t)] ) / ( R_m(t) - L_m(t) )    if [R_m(t) - R_s(t)] - [L_m(t) + L_s(t)] > 0
Δ(t) = 0                                                                  otherwise
Thus, Δ(t) > 0 (Δ(t) < 0) means that the leftward trials tend to be considerably higher (lower) than rightward trials for that electrode at time t, while Δ(t) = 0 suggests no left/right separation at time t. We define a consecutive time period of |Δ(t)| > 0 for t < prediction time (the time before the go signal when we want the system to output a prediction; -0.5 s for the ORT trials) as a time window (Fig. 2e). After all time windows are found for all electrodes, time windows less than M ms apart are combined into one. Then, for each time window from t_1 to t_2 we define a = ∫_{t_1}^{t_2} |Δ(t)| dt. We then eliminate all time windows satisfying a < A. We found the values M = 200 ms and A = 4,500 μV·ms to be optimal for real-time analysis. This resulted in 20-30 time windows over all 64 electrodes that we monitored.
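For illustration, the separability index and the window extraction can be sketched as follows, with past trials stored per electrode as (n_trials, n_samples) arrays. The 200 ms merge gap and the 4,500 μV·ms area threshold come from the text; the array layout and names are our assumptions.

import numpy as np

def separability(left, right):
    """Delta(t) from the means and standard errors of past left/right trials."""
    Lm, Rm = left.mean(0), right.mean(0)
    Ls = left.std(0, ddof=1) / np.sqrt(len(left))     # standard error
    Rs = right.std(0, ddof=1) / np.sqrt(len(right))
    up = (Lm - Ls) - (Rm + Rs)                        # left clearly above right
    dn = (Rm - Rs) - (Lm + Ls)                        # right clearly above left
    delta = np.zeros_like(Lm)
    with np.errstate(divide='ignore', invalid='ignore'):
        delta[up > 0] = (up / (Lm - Rm))[up > 0]
        delta[dn > 0] = (dn / (Rm - Lm))[dn > 0]
    return delta

def time_windows(delta, dt_ms, merge_ms=200.0, min_area=4500.0):
    """Runs of |Delta| > 0, merged when < merge_ms apart, kept when the
    integral of |Delta| over the run is at least min_area."""
    runs, start = [], None
    for i, on in enumerate(np.abs(delta) > 0):
        if on and start is None:
            start = i
        elif not on and start is not None:
            runs.append([start, i]); start = None
    if start is not None:
        runs.append([start, len(delta)])
    merged = []
    for r in runs:                                    # merge nearby windows
        if merged and (r[0] - merged[-1][1]) * dt_ms < merge_ms:
            merged[-1][1] = r[1]
        else:
            merged.append(r)
    return [(a, b) for a, b in merged
            if np.trapz(np.abs(delta[a:b]), dx=dt_ms) >= min_area]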
Figure 4: The experimental setup in the clinic. At 400 ms before the go signal, the patient and experimenter are watching the game screen (inset on bottom right) on the analysis/stimulus computer (bottom left) and still pressing down the buttons of the response box. The real-time system has already computed a prediction, and thus displays an arrow on the screen behind the patient and plays a tone in the experimenter's ear ipsilateral to the hand it predicts he should raise to beat the patient (see Supplemental Movie).
3.3 Classifier selection and ETC determination
We used ensemble learning with 7 types of relatively simple binary classifiers (due to real-time processing considerations) on every electrode's time windows (Fig. 2f). Classifiers A to G would classify a_N(t) as Lt if:

(A) defining a_{N,M}, L_{m,M} and R_{m,M} as the sums of a_N(t), L_m(t) and R_m(t) over time window M,
    (i) sign R_{m,M} ≠ sign a_{N,M} = sign L_{m,M}, or
    (ii) sign R_{m,M} = sign a_{N,M} = sign L_{m,M} and L_{m,M} > R_{m,M}, or
    (iii) sign R_m(t) ≠ sign a_{N,M} ≠ sign L_m(t) and L_{m,M} < R_{m,M};
(B) |mean a_N(t) - mean L_m(t)| < |mean a_N(t) - mean R_m(t)|;
(C) |median a_N(t) - median L_m(t)| < |median a_N(t) - median R_m(t)| over the time window;
(D) ||a_N(t) - L_m(t)||_{L2} < ||a_N(t) - R_m(t)||_{L2} over the time window;
(E) a_N(t) is convex/concave like L_m(t) while R_m(t) is concave/convex, respectively;
(F) a linear support-vector machine (SVM) designates it as such; and
(G) k-nearest neighbors (KNN) with Euclidean distance designates it as such.
Each classifier is optimized for certain types of features. To estimate how well its classification would generalize from the training to the test set, we trained and tested it using a 70/30 cross-validation procedure within the training set. We tested each classifier on every time window of every electrode, discarding those with accuracy <0.68, which left 12.0 ± 1.6% of the original 232 ± 18 ETCs, on average (±standard error). The training phase therefore ultimately output a set of S binary ETC combinations (Fig. 2g) that were used in the prediction phase (Fig. 3).
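Three of the simpler rules above (B, C and D) reduce to one-line predicates; the sketch below is our own minimal rendering, not the authors' implementation. Each function votes +1 for left and -1 for right, given the current segment a and the left/right training means Lm, Rm over the same window.

import numpy as np

def clf_mean(a, Lm, Rm):      # rule (B): the closer mean wins
    return 1 if abs(a.mean() - Lm.mean()) < abs(a.mean() - Rm.mean()) else -1

def clf_median(a, Lm, Rm):    # rule (C): the closer median wins
    return (1 if abs(np.median(a) - np.median(Lm))
                 < abs(np.median(a) - np.median(Rm)) else -1)

def clf_l2(a, Lm, Rm):        # rule (D): the smaller L2 distance wins
    return 1 if np.linalg.norm(a - Lm) < np.linalg.norm(a - Rm) else -1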
3.4 The prediction-phase weighting system
In the prediction phase, each of the overall S binary ETCs calculates a prediction, c_i ∈ {-1, 1} (for right and left, respectively), independently at the desired prediction time. All classifiers are initially given the same weight, w_1 = w_2 = ... = w_S = 1. We then calculate σ = Σ_{i=1}^{S} w_i · c_i and predict left (right) if σ > d (σ < -d), or declare it an undetermined trial if -d < σ < d. Here d is the drop-off threshold for the prediction. Thus the larger d is, the more confident the system needs to be to make a prediction, and the larger the proportion of trials on which the system abstains (the drop-off rate). Weight w_i associated with ETC_i is increased (decreased) by 0.1 whenever ETC_i predicts the hand movement correctly (incorrectly). A constantly erring ETC would therefore be associated with an increasingly small and then increasingly negative weight.
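A minimal sketch of this vote, with the threshold d and the 0.1 weight steps taken from the text; the names are ours.

def vote(preds, weights, d):
    """preds: list of c_i in {-1, +1}; returns 'L', 'R' or None (abstain)."""
    s = sum(w * c for w, c in zip(weights, preds))
    if s > d:
        return 'L'
    if s < -d:
        return 'R'
    return None                          # undetermined trial (dropped)

def update_weights(preds, weights, truth):
    """truth: +1 if the left hand was raised, -1 for the right hand."""
    return [w + 0.1 if c == truth else w - 0.1 for w, c in zip(weights, preds)]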
3.5 Implementation
The algorithm was implemented in MATLAB 2011a (MathWorks, Natick, MA) as well as in C++
on Visual Studio 2008 (Microsoft, Redmond, WA) for enhanced performance. The neural signals
were collected by the Digital Lynx S system using Cheetah 5.4.0 (Neuralynx, Redmond, WA). The
simulated-ORT system was also implemented in MATLAB 2011a. The simulated-ORT analyses
carried out in this paper used real patient data saved on the Digital Lynx system.
[Figure 5 axes: prediction accuracy (0.5 to 1) versus time before go-signal onset (-5 to 0 s); legend: drop rate none, 0.18, 0.30; a dashed horizontal line marks significant accuracy (p = 0.05).]

Figure 5: Across-subjects average of the prediction accuracy of simulated-ORT versus time before the go signal. The mean accuracies over time when the system predicts on every trial, or is allowed to drop 19% or 30% of the trials, are depicted in blue, green and red, respectively (±standard error shaded). Values above the dashed horizontal line are significant at p = 0.05.
4 Results
We tested our prediction system in actual real time on 2 patients, P15CS and P19CS (a depth and a grid patient, respectively), with a prediction time of 0.5 s before the go signal (see Supplementary Movie). Because of computational limitations, the ORT system could only track 10 electrodes with just 1 ETC per electrode in real time. For P15CS, we achieved an accuracy of 72±2% (±standard error; accuracy = number of accurately predicted trials / [total number of trials - number of dropped trials]; p = 10^{-8}, binomial test) without modifying the weights online during the prediction (see Section 3.4). For P19CS we did not run patient-specific training of the ORT system, and used parameter values that were good on average over previous patients instead. The prediction accuracy was significantly above chance, 63±2% (±standard error; p = 7 × 10^{-4}, binomial test). To understand how much we could improve our accuracy with optimized hardware/software, we ran the simulated-ORT at various prediction times along the 5 s countdown leading to the go signal. We further tested 3 drop-off rates: 0, 0.19 and 0.30 (Fig. 5; drop-off rate = number of dropped trials / total number of trials; these resulted from 3 drop-off thresholds, 0, 0.1 and 0.2, respectively; see Section 3.4). Running offline, we were able to track 20-30 ETCs, which resulted in considerably higher accuracies (Figs. 5, 6).
Averaged over all subjects, the accuracy rose from about 65% more than 4 s before the go signal to 83-92% close to go-signal onset, depending on the allowed drop-off rate. In particular, we found that for a prediction time of 0.5 s before go-signal onset, we could achieve accuracies of 81±5% and 90±3% (±standard error) for P15CS and P19CS, respectively, with no drop off (Fig. 6). We also analyzed the weights that our weighting system assigned to the different ETCs. We found that the empirical distribution of weights to ETCs associated with classifiers A to G was, on average: 0.15, 0.12, 0.16, 0.22, 0.01, 0.26 and 0.07, respectively. This suggests that the linear SVM and L2-norm comparisons (of a_N to L_m and R_m) together make up nearly half of the overall weights attributed to the classifiers, while the current concave/convex measure is of little use as a classifier.

Figure 6: Simulated-ORT accuracy over time for individual patients with no drop off. [Axes: accuracy (0.4 to 1) versus time before go signal (-5 to 0 s); one curve per patient: P12CS, P15CS, P16CS, P19CS, P22CS, P29HMH, P30HMH.]
5 Discussion
We constructed an ORT system that, based on intracranial recordings, predicted which hand a person would raise well before movement onset at accuracies much greater than chance in a competitive environment. We further tested this system off-line, which suggested that with optimized
hardware/software, such action contents would be predictable in real time at relatively high accuracies already several seconds before movement onset. Both our prediction accuracy and drop-off
rates close to movement onset are superior to those achieved before movement onset with noninvasive methods like EEG and fMRI [7, 12-14]. Importantly, our subjects played a matching-pennies game (a 2-choice version of rock-paper-scissors [15]) to keep their task realistic, with minor
for the subjects. It was suggested that accurate online, real-time prediction before movement onset
is key to investigating the relation between the neural correlates of decisions, their awareness, and
voluntary action [16, 17]. Such prediction capabilities would facilitate many types of experiments
that are currently infeasible. For example, it would make it possible to study decision reversals on
a single-trial basis, or to test whether subjects can guess above chance which of their action contents are predictable from their current brain activity, potentially before having consciously made up
their mind [16, 18]. Accurately decoding these preparatory motor signals may also result in earlier
and improved classification for brain-computer interfaces [13, 19, 20]. The work we present here
suggests that such ORT analysis might well be possible.
Acknowledgements
We thank Ueli Rutishauser, Regan Blythe Towel, Liad Mudrik and Ralph Adolphs for meaningful
discussions. This research was supported by the Ralph Schlaeger Charitable Foundation, Florida State University's "Big Questions in Free Will" initiative and the G. Harold & Leila Y. Mathers
Charitable Foundation.
8
References
[1] B. Libet, C. Gleason, E. Wright, and D. Pearl. Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106:623, 1983.
[2] B. Libet. Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8:529-539, 1985.
[3] P. Haggard and M. Eimer. On the relation between brain potentials and the awareness of voluntary movements. Experimental Brain Research, 126:128-133, 1999.
[4] A. Sirigu, E. Daprati, S. Ciancia, P. Giraux, N. Nighoghossian, A. Posada, and P. Haggard. Altered awareness of voluntary action after damage to the parietal cortex. Nature Neuroscience, 7:80-84, 2003.
[5] H. Kornhuber and L. Deecke. Hirnpotentialänderungen bei Willkürbewegungen und passiven Bewegungen des Menschen: Bereitschaftspotential und reafferente Potentiale. Pflügers Archiv European Journal of Physiology, 284:1-17, 1965.
[6] H. Shibasaki and M. Hallett. What is the Bereitschaftspotential? Clinical Neurophysiology, 117:2341-2356, 2006.
[7] C. Soon, M. Brass, H. Heinze, and J. Haynes. Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11:543-545, 2008.
[8] I. Fried, R. Mukamel, and G. Kreiman. Internally generated preactivation of single neurons in human medial frontal cortex predicts volition. Neuron, 69:548-562, 2011.
[9] M. Cerf, N. Thiruvengadam, F. Mormann, A. Kraskov, R. Quian Quiroga, C. Koch, and I. Fried. On-line, voluntary control of human temporal lobe neurons. Nature, 467:1104-1108, 2010.
[10] T. Ball, M. Kern, I. Mutschler, A. Aertsen, and A. Schulze-Bonhage. Signal quality of simultaneously recorded invasive and non-invasive EEG. NeuroImage, 46:708-716, 2009.
[11] G. Schalk, J. Kubanek, K. Miller, N. Anderson, E. Leuthardt, J. Ojemann, D. Limbrick, D. Moran, L. Gerhardt, and J. Wolpaw. Decoding two-dimensional movement trajectories using electrocorticographic signals in humans. Journal of Neural Engineering, 4:264, 2007.
[12] O. Bai, V. Rathi, P. Lin, D. Huang, H. Battapady, D. Y. Fei, L. Schneider, E. Houdayer, X. Chen, and M. Hallett. Prediction of human voluntary movement before it occurs. Clinical Neurophysiology, 122:364-372, 2011.
[13] O. Bai, P. Lin, S. Vorbach, J. Li, S. Furlani, and M. Hallett. Exploration of computational methods for classification of movement intention during human voluntary movement from single trial EEG. Clinical Neurophysiology, 118:2637-2655, 2007.
[14] U. Maoz, A. Arieli, S. Ullman, and C. Koch. Using single-trial EEG data to predict laterality of voluntary motor decisions. Society for Neuroscience, 38:289.6, 2008.
[15] C. Camerer. Behavioral game theory: Experiments in strategic interaction. Princeton University Press, 2003.
[16] J. D. Haynes. Decoding and predicting intentions. Annals of the New York Academy of Sciences, 1224:9-21, 2011.
[17] P. Haggard. Decision time for free will. Neuron, 69:404-406, 2011.
[18] J. D. Haynes. Beyond Libet. In W. Sinnott-Armstrong and L. Nadel, editors, Conscious Will and Responsibility, pages 85-96. Oxford University Press, 2011.
[19] A. Muralidharan, J. Chae, and D. M. Taylor. Extracting attempted hand movements from EEGs in people with complete hand paralysis following stroke. Frontiers in Neuroscience, 5, 2011.
[20] E. Lew, R. Chavarriaga, S. Silvoni, and J. R. Millán. Detection of self-paced reaching movement intention from EEG signals. Frontiers in Neuroengineering, 5:13, 2012.
Risk Aversion in Markov Decision Processes
via Near-Optimal Chernoff Bounds
Pieter Abbeel
Department of Computer Science
University of California at Berkeley
Berkeley CA 94720, USA
[email protected]
Teodor Mihai Moldovan
Department of Computer Science
University of California at Berkeley
Berkeley CA 94720, USA
[email protected]
Abstract
The expected return is a widely used objective in decision making under uncertainty. Many algorithms, such as value iteration, have been proposed to optimize
it. In risk-aware settings, however, the expected return is often not an appropriate
objective to optimize. We propose a new optimization objective for risk-aware
planning and show that it has desirable theoretical properties. We also draw connections to previously proposed objectives for risk-aware planning: minmax, exponential utility, percentile and mean minus variance. Our method applies to an
extended class of Markov decision processes: we allow costs to be stochastic as
long as they are bounded. Additionally, we present an efficient algorithm for optimizing the proposed objective. Synthetic and real-world experiments illustrate
the effectiveness of our method, at scale.
1 Introduction
The expected return is often the objective function of choice in planning problems where outcomes
not only depend on the actor's decisions but also on random events. Often expectations are the
natural choice, as the law of large numbers guarantees that the average return over many independent
runs will converge to the expectation. Moreover, the linearity of expectations can often be leveraged
to obtain efficient algorithms.
Some games, however, can only be played once, either because they take a very long time (investing
for retirement), because we are not given a chance to try again if we lose (skydiving, crossing
the road), or because i.i.d. versions of the game are not available (stock market). In this setting,
we can no longer take advantage of the law of large numbers to ensure that the return is close
to its expectation with high probability, so the expected return might not be the best objective to
optimize. If we were pessimistic, we might assume that everything that can go wrong will go wrong
and try to minimize the losses under this assumption. This is called minmax optimization and is
sometimes useful, but, most often, the resulting policies are overly cautious. A more balanced and
general approach would include minmax optimization and expectation optimization, corresponding
respectively to absolute risk aversion and risk ignorance, but would also allow a spectrum of policies
between these extremes.
As a motivating example, consider buying tickets to fly to a very important meeting. Shorter travel
time is preferable, but even more importantly, it would be disastrous if you arrived late. Some flights
arrive on time more often than others, and the delays might be amplified if you miss connecting
flights. With these risks in mind, would you rather take a route with an expected travel time of 12:21
and no further guarantees, or would you prefer a route that takes less than 16:19 with 99% probability? Our method produces these options when traveling from Shreveport Regional Airport (SHV) to
Rafael Hernández Airport (BQN). According to historical flight data, if you chose the former alternative you could end up travelling for 22 hours with 8% probability. Another example comes from
software quality assurance. Amazon.com requires its sub-services to report and optimize performance at the 99.9th percentile, rather than in expectation, to make sure that all of its customers have
a good experience, not just the majority [1]. In the economics literature, this percentile criterion
is known as value at risk and has become a widely used measure of risk after the market crash of
1987 [2]. At the same time, the classical method for managing risk in investment is Markowitz portfolio optimization where the objective is to optimize expectation minus weighted variance. These
examples suggest that proper risk-aware planning should allow a trade-off between expectation and
variance, and, at the same time, should provide some guarantees about the probability of failure.
Risk-aware planning for Markov decision processes (MDPs) is difficult for two main reasons. First,
optimizing many of the intuitive risk-aware objectives seems to be intractable computationally. Both
mean minus variance optimization and percentile optimization for MDPs have been shown to be
NP-hard in general [3, 4]. Consequently, we can only optimize relaxations of these objectives in
practice. Second, it seems to be difficult to find an optimization objective which correctly models
our intuition of risk awareness. Even though expectation, variance and percentile levels relate to
risk awareness, optimizing them directly can lead to counterintuitive policies as illustrated recently
in [3], for the case of mean minus variance optimization, and in the appendix of this paper, for
percentile optimization.
Planning under uncertainty in MDPs is an old topic that has been addressed by many authors. The
minmax objective has been proposed in [5, 6], which propose a dynamic programming algorithm for
optimizing it efficiently. Unfortunately, minmax policies tend to be overly cautious. A number of
methods have been proposed for relaxations of mean minus variance optimization [3, 7]. Percentile
optimization has been shown to be tractable when dealing with ambiguity in MDP parameters [8, 9],
and it has also been discussed in the context of risk [10, 11]. Our approach is closest to the line of
work on exponential utility optimization [12, 13]. This problem can be solved efficiently and the
resulting policies conform to our intuition of risk awareness. However, previous methods give no
guarantees about probability of failure or variance. For an overview of previously used objectives
for risk-aware planning in MDPs, see [14, 15].
Our method arises from approaching the problem in the context of probability theory. We observe
connections between exponential utility maximization, Chernoff bounds, and cumulant generating
functions, which enables formulating a new optimization objective for risk-aware planning. This
new objective is essentially a re-parametrization of exponential utility, and inherits both the efficient optimization algorithms and the concordance to intuition about risk awareness. We show that
optimizing the proposed objective includes, as limiting cases, both minmax and expectation optimization and allows interpolation between them. Additionally, we provide guarantees at a certain
percentile level, and show connections to mean minus variance optimization.
Two experiments, one synthetic and one based on real-world data, support our theoretical guarantees and showcase the proposed optimization algorithms. Our largest MDP has 124791 state-action
pairs, significantly larger than experiments in most past work on risk-aware planning. Our experiments illustrate the ability of our approach to produce, out of the exponentially many policies available, a family of policies that agrees with the human intuition of varying risk.
2 Background and Notation
An MDP consists of a state space S, an action space A, state transition dynamics, and a cost function G. Assume that, at time t, the system is in state s_t ∈ S. Once the player chooses an action a_t ∈ A, the system transitions stochastically to state s_{t+1} ∈ S, with probability p(s_{t+1} | s_t, a_t), and the player incurs a stochastic cost of G_t(s_t, a_t, s_{t+1}). The process continues for a number of time steps, h, called the horizon. We eventually care about the total cost obtained. We represent the player's strategy as a time dependent policy, which is a measure on the space of state-actions. Finally, we set the starting state to some fixed s_0 ∈ S. Then, the objective is to "optimize" the random variable J^h, defined by J^h := Σ_{t=0}^{h-1} G_t(S_t, A_t, S_{t+1}). Traditionally, "optimizing" J means minimizing its expected value, that is, solving min_π E_{s,π}[J]. The classical solution to this problem is to run value iteration, summarized below:
q^{t+1}(s, a) := Σ_{s'} p_{s'|s,a} ( G^t_{s,a,s'} + j^t(s') ),        j^t(s) := min_a q^t(s, a) = min_π E_{s,π}[J^t]
?
We will refer to policies obtained by standard value iteration as expectimin policies. We use upper
case letters for random variables. We assume that the state-action space is finite and that sums with
zero terms, for example J 0 , are equal to zero. The notation Es,? signifies taking the expectation
starting from S0 = s, and following policy ?. We assume that costs are upper bounded, that is there
exists jM such that J ? jM almost surely for any start state and any policy, and that the expected
costs are finite. Finally, in this paper we will not consider discounting explicitly. If necessary,
discounting can be introduced in one of two ways: either by adding a transition from every state,
for all actions, to an absorbing ?end game? state, with probability ?, or by setting a time dependent
cost as Gtnew = ? t Gtold . Note that these two ways of introducing discounting are equivalent when
optimizing the expected cost, but they can differ in the risk-aware setting we are considering. We
refer the reader to [16] and [17] for further background on MDPs.
3 The Chernoff Functional as Risk-Aware Objective
We propose optimizing the following functional of the cost, which we call the Chernoff functional since it often appears in proofs of Chernoff bounds:

C^δ_{s,π}[J] = inf_{α>0} ( α log E_{s,π}[ e^{J/α} ] - α log(δ) ).        (1)
First, note the total cost appears in the expression of the Chernoff functional as an exponential utility
(E_{s,π}[e^{J/α}]). This shows that there is a strong connection between our method and exponential
utility optimization. Specifically, all policies proposed by our algorithm, including the final solution,
are optimal policies with respect to the exponential utility for some parameter. These policies are
known to show risk-awareness in practice [12, 13], and our method inherits this property. In some
sense, our proposed objective is a re-parametrization of exponential utility, which was obtained
through its connections to Chernoff bounds and cumulant generating functions. The theorem below,
which is one of the main contributions of this paper, provides more reasons for optimizing the
Chernoff functional in the risk-aware setting. We will state and discuss the theorem here, but leave
the proof for the appendix.
Theorem 1. Let δ ∈ [0, 1], and let J be a random variable that has a cumulant generating function, that is, E[e^{J/α}] < ∞ for all α > 0. Then, the Chernoff functional of this random variable, C^δ[J], is well defined, and has the following properties:

(i) P(J ≥ C^δ[J]) ≤ δ
(ii) C^1[J] = lim_{α→∞} α log E[e^{J/α}] = E[J]
(iii) C^0[J] := lim_{δ→0} C^δ[J] = lim_{α→0} α log E[e^{J/α}] = sup{j : P{J ≥ j} > 0} < ∞
(iv) C^δ[J] = E[J] + sqrt(2 log(1/δ) Var[J]) if J is Gaussian
(v) As δ → 1, C^δ[J] → E[J] + sqrt(2 log(1/δ) Var[J])
(vi) C^δ[J] is a smooth, decreasing function of δ.
Proof sketch. Property (i) is simply a Chernoff bound and follows by applying Markov's inequality to the random variable e^{J/α}. Property (iv) follows from the fact that all but the first two cumulants of Gaussian random variables are zero [18]. Properties (ii), (iii), (v) and (vi) follow from the following properties of the cumulant generating function, log E[e^{zJ}] [18]:

(a) log E[e^{zJ}] = Σ_{i=1}^{∞} z^i k_i / i!, where the k_i are the cumulants [18], e.g. k_1 = E[J], k_2 = Var[J].
(b) log E[e^{zJ}], as a function of z ∈ R, is strictly convex, analytic and infinitely differentiable in a neighborhood of zero, if it is finite in that neighborhood.
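Property (iv) can also be checked numerically: for a Gaussian cost, the cumulant generating function is available in closed form, so α log E[e^{J/α}] = μ + s^2/(2α), and a fine grid over the scale can stand in for the infimum. This is an illustrative sketch only, with arbitrary parameter values.

import numpy as np

mu, s, delta = 1.0, 2.0, 0.05
alphas = np.logspace(-3, 3, 20001)
C = (mu + s**2 / (2 * alphas) - alphas * np.log(delta)).min()
print(C, mu + np.sqrt(2 * np.log(1 / delta)) * s)   # the two values agree closely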
[Figure 1 plot: f(α) decreases from the minimax cost at small α to the expectimin cost at large α; legend: exact (f), approximate (f̂).]

Figure 1: Plot showing the exact function f defined in Equation 2 and the approximation f̂ that our algorithm constructs for the Grid World MDP described in Section 5.1.
Properties (ii) and (iii) show that we can use the δ parameter to interpolate between the nominal policy, which ignores risk, at δ = 1, and the minmax policy, which corresponds to extreme risk aversion, at δ = 0. Property (i) shows that the value of the Chernoff functional is, with probability at least 1 - δ, an upper bound on the cost obtained by following the corresponding Chernoff policy. These two observations suggest that by tuning δ from 0 to 1 we can find a family of risk-aware policies, in order of risk aversion. Our experiments support this hypothesis (Section 5).

Property (i) shows a connection between our approach and percentile optimization. Although we are not optimizing the δ-percentile directly, our method provides guarantees about it. Properties (iv) and (v) show a connection between optimizing the Chernoff functional and mean minus variance optimization, which has been proposed before for risk-aware planning, but was found to be intractable in general [3]. Via property (v), we can optimize mean minus variance with a low weight on variance if we set δ close to 1. In the limit, this allows us to optimize the expectation, while breaking ties in favor of lower variance. Property (iv) shows that we can optimize mean minus scaled standard deviation exactly if the total cost is Gaussian. Typically, this will not be the case, but, if the MDP is ergodic and the time horizon is large enough, the total cost will be close to Gaussian, by the central limit theorem. To see why this is true, note that, by the Markov property, costs between successive returns to the same state are i.i.d. random variables [19]. Our formulation ties into mean minus standard deviation optimization, which is of consistent dimensionality, unlike the classical mean minus variance objective.
4 Optimizing the Proposed Objective
Finding the policy that optimizes our proposed objective at a given risk level δ amounts to a joint optimization problem (Bellman optimality does not hold for our objective; see Appendix for discussion):

min_π C^δ_{s,π}[J] = inf_{α>0} ( α log min_π E_{s,π}[ e^{J/α} ] - α log(δ) )        (2)
                   = inf_{α>0} ( f(α) - α log(δ) ),   where  f(α) := α log min_π E_{s,π}[ e^{J/α} ].
The inner optimization problem, the optimization over policies π, is simply exponential utility optimization, a classical problem that can be solved efficiently. For brevity, we will not discuss solutions to this problem and, instead, refer the readers to [12, 13]. The main difficulty is solving the outer optimization problem, over the scale variable α. Unfortunately, this problem is not convex and may have a large number of local minima. Our main algorithmic contribution consists of an approach for solving the outer (non-convex) optimization problem efficiently to some specified precision ε.
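The inner problem is tractable because the exponential factors across time steps: e^{(g + rest)/α} = e^{g/α} · e^{rest/α}, so a multiplicative analogue of value iteration applies. Below is a minimal sketch for deterministic per-step costs G[a, s, s'] (for stochastic costs one would replace exp(G/α) by its expectation); for small α one would work in log space to avoid overflow. The array layout and names are our assumptions.

import numpy as np

def exp_utility_f(P, G, h, alpha):
    """Returns f_s(alpha) = alpha * log min_pi E_{s,pi}[exp(J^h / alpha)] per state."""
    nA, nS, _ = P.shape
    u = np.ones(nS)                                    # E[exp(J^0 / alpha)] = 1
    for _ in range(h):
        q = (P * np.exp(G / alpha) * u[None, None, :]).sum(axis=2)
        u = q.min(axis=0)                              # optimal exponential utility
    return alpha * np.log(u)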
Based on Theorems 1 and 2 (below), we propose a method for finding the policy that minimizes the Chernoff functional, to precision ε, with worst-case time complexity O(h|S|^2|A|/ε). It is summarized in Algorithm 1. Our approach is to solve the optimization problem in (2) with an approximation of the function f (Figure 1 shows an example plot of this function). The algorithm maintains such an approximation and improves it as needed, up to a precision of ε. In practice we might want to run the algorithm for more than one setting of δ, to find policies for the same planning task at different levels of risk aversion, say at n different levels. Naively, the time complexity of doing this
Algorithm 1 Near-optimal Chernoff bound algorithm
  f̂ ← empty hash map                                     ▷ will store incremental approximation of f defined in Eq. 2
  f̂[0] ← f(0)                                            ▷ minimax cost of the MDP
  f̂[∞] ← f(∞)                                            ▷ expectimin cost of the MDP
  for α ∈ {1, 10, 100, ...}, until f̂[α] - f̂[∞] < ε, do    ▷ find upper bound
      f̂[α] ← f(α)                                        ▷ exponential utility optimization
  for α ∈ {1, 0.1, 0.01, ...}, until f̂[0] - f̂[α] < ε, do  ▷ find lower bound
      f̂[α] ← f(α)                                        ▷ exponential utility optimization
  repeat
      α* ← argmin_{α ∈ keys(f̂)} ( f̂[α] - α log(δ) )       ▷ argmin over previously computed costs
      α ← ( α* · min{α' > α*, α' ∈ keys(f̂)} )^{1/2}       ▷ split interval at geometric mean
      f̂[α] ← f(α)                                        ▷ exponential utility optimization
  until f̂[α*] - f̂[α] < ε                                  ▷ until f̂ is an ε-accurate approximation of f
  return the optimal exponential utility policy(MDP, 1/α*).
would be O(nh|S|^2|A|/ε) but, fortunately, our function approximation can be reused between subsequent runs of the algorithm, saving computation time, so the total complexity will, in fact, be only O(h|S|^2|A|/ε + n).

Properties (ii) and (iii) of Theorem 1 imply that f(0) can be computed by minimax optimization and f(∞) can be computed by value iteration (expectimin optimization), which both have the same time complexity as exponential utility optimization: O(h|S|^2|A|). Once we have computed these limits, the next step in the algorithm is finding some appropriate bounding interval, [α_1, α_2], such that f(0) - f(α_1) < ε and f(α_2) - f(∞) < ε. We do this by first searching over α = 1, 0.1, 10^{-2}, ..., and, then, over α = 1, 10, 10^2, .... For a given machine architecture, the number of α values is bounded by the number format used in the implementation. For example, working with double precision floating-point numbers limits the number of α evaluations to 2 × 1023, implied by the fact that exponents are only assigned 11 bits. In our experiments, this step takes 10-15 function evaluations. Now, for any given risk level, δ, we will find α* that minimizes the objective, f(α) - α log(δ), among those α where we have already evaluated f. We will, then, evaluate f at a new point: the geometric mean of α* and its closest neighbor to the right. We stop iterating when the function value at the new point is less than ε away from the function value at α*, and return the corresponding optimal exponential utility policy. Consequently, our algorithm evaluates f at a subset of the points {α_1 (α_2/α_1)^{i/n} : i = 0, ..., n} where n is a power of 2. Theorem 2 guarantees that to get an ε guarantee for the accuracy of the optimization it suffices to perform n(ε) = O(1/ε) evaluations of f, where we are now treating log(α_2) - log(α_1) as a constant. Therefore, the number of function evaluations is O(1/ε), and, since the time complexity of every evaluation is O(h|S|^2|A|), the total time complexity of the algorithm is O(h|S|^2|A|/ε).
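The outer search can be sketched compactly as follows, assuming a memoized callable f (one exponential utility optimization per new scale) and a cache pre-seeded with the bracket endpoints found above. This mirrors the repeat-loop of Algorithm 1, but is our own rendering, not the authors' code.

import math

def outer_search(f, cache, delta, eps):
    """cache: dict alpha -> f(alpha), pre-seeded with [alpha_1, alpha_2]."""
    while True:
        keys = sorted(cache)
        best = min(keys, key=lambda a: cache[a] - a * math.log(delta))
        right = next((a for a in keys if a > best), None)
        if right is None:
            return best                      # best is the upper bracket endpoint
        mid = math.sqrt(best * right)        # split the interval at the geometric mean
        cache[mid] = f(mid)
        if cache[best] - cache[mid] < eps:   # f is non-increasing in alpha
            return best

Each call returns a near-optimal scale α*, and the Chernoff policy is then the exponential utility policy for parameter 1/α*; reusing the same cache across different δ values gives the O(h|S|^2|A|/ε + n) total cost noted above.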
Theorem 2. Consider the interval 0 < α_1 < α_2, split up into n sub-intervals by α_i^n = α_1 (α_2/α_1)^{i/n}, and let f̂_n(α) := f( max_{0≤i≤n} { α_i^n : α_i^n ≤ α } ) be our piecewise constant approximation to the function f(α) defined in Equation (2). Then, for a given approximation error ε there exists n(ε) = O((log(α_2) - log(α_1))/ε) such that |f̂_{n(ε)}(α) - f(α)| ≤ ε for all α ∈ [α_1, α_2].
Proof sketch. The key insight when proving this theorem is bounding the rate of change of f. We can immediately see that f_π(α) := α log E_{s,π}[e^{J/α}] is a convex function, since it is the perspective transformation of a convex function, namely, the cumulant generating function of the total cost J. Additionally, Theorem 1 shows that f_π is lower bounded by E_{s,π}[J], assumed to be finite, which implies that f_π is non-increasing. On the other hand, by directly differentiating the definition of f_π, we get that α f'_π(α) = f_π(α) - E_{s,π}[J e^{J/α}] / E_{s,π}[e^{J/α}].

Since we assumed that the costs, J, are upper bounded, there exists a maximum cost j_M such that J ≤ j_M almost surely, for any starting state s and any policy π. We have also shown that f_π(α) ≥ E_{s,π}[J] ≥ j_m := min_{π'} E_{s,π'}[J], so we conclude that -(j_M - j_m)/α ≤ f'_π(α) ≤ 0 for any policy π. Now that we have bounded the derivative of f_π, we can see that the value of f cannot change too
[Figure 2 grid: a maze of obstacle squares (#) with a goal square ($); colored arrows mark the most likely paths for δ ∈ {0.75, 0.9, 0.99, 1.0 (expectimin)}, δ = 0.6, δ ∈ {0.1, 0.3}, δ ∈ {10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}} and δ ∈ {10^{-10}, 10^{-8}}.]

Figure 2: Chernoff policies for the Grid World MDP. See text for complete description. The colored arrows indicate the most likely paths under Chernoff policies for different values of δ. The minimax policy (δ = 0) acts randomly since it assumes that any action will lead to a trap.
much over an interval [α_i^n, α_{i+1}^n]. Let π_i := argmin_π f_π(α_i^n) and π_{i+1} := argmin_π f_π(α_{i+1}^n). Then:

0 ≤ f(α_i^n) - f(α_{i+1}^n) = f_{π_i}(α_i^n) - f_{π_{i+1}}(α_{i+1}^n) ≤ f_{π_{i+1}}(α_i^n) - f_{π_{i+1}}(α_{i+1}^n)
  ≤ max_{α_i^n ≤ α ≤ α_{i+1}^n} |f'_{π_{i+1}}(α)| · (α_{i+1}^n - α_i^n) = -f'_{π_{i+1}}(α_i^n) · (α_{i+1}^n - α_i^n)
  ≤ (j_M - j_m) · (α_{i+1}^n - α_i^n) / α_i^n = (j_M - j_m) ( (α_2/α_1)^{1/n} - 1 ),        (3)

where we first used the fact that f_{π_i}(α_i^n) = min_π f_π(α_i^n) ≤ f_{π_{i+1}}(α_i^n), then the convexity of f_{π_{i+1}}, which implies that f'_{π_{i+1}} is increasing, and, finally, our previous derivative bound. Our final goal is to find a value of n(ε) such that the last expression in Equation 3 is less than ε. One can easily verify that the following n(ε) satisfies this requirement (the detailed derivation appears in the Appendix):

n(ε) = ⌈ (j_M - j_m)/ε · log(α_2/α_1) + log(α_2/α_1) ⌉.
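As a worked instance of this bound, with illustrative numbers (the cost range j_M - j_m = 35 and the bracket [10^{-2}, 10^2] are our assumptions, chosen to be plausible for the grid world below):

import math

j_M, j_m, a1, a2, eps = 35.0, 0.0, 1e-2, 1e2, 1.0
n = math.ceil((j_M - j_m) / eps * math.log(a2 / a1) + math.log(a2 / a1))
print(n)   # 332: a loose worst case; Algorithm 1 typically needs far fewer evaluations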
5 Experiments
We ran a number of experiments to test that our proposed objective indeed captures the intuitive meaning of risk-aware planning. The first experiment models a situation where it is immediately obvious what the family of risk-aware policies should be. We show that optimizing the Chernoff functional with increasing values of δ produces the intuitively correct family of policies. The second experiment shows that our method can be applied successfully to a large scale, real world problem, where it is difficult to immediately "see" the risk-aware family of policies.

Our experiments empirically confirm some of the properties of the Chernoff functional proven in Theorem 1: the probability that the return is lower than the value of the Chernoff policy at level δ is always less than δ, setting δ = 1 corresponds to optimizing the expected return with the added benefit of breaking ties in favor of lower variance, and setting δ = 0 leads to the minmax policy whenever it is defined. Additionally, we observed that policies at lower risk levels, δ, tend to have lower expectation but also lower variance, if the structure of the problem allows it. Generally, the probability of extremely bad outcomes decreases as we lower δ.
5.1 Grid world
We first tested our algorithm on the Grid-World MDP (Figure 2). It models an obstacle avoidance
problem with stochastic dynamics. Each state corresponds to a square in the grid, and the actions,
{N, NE, E, SE, S, SW, W, NW}, typically cause a move in the respective direction. In unmarked
squares, the actor's intention is executed with probability .93. Each of the seven remaining actions
might be executed instead, each with probability 0.01. Squares marked with $ and # are absorbing
states. The former gives a reward of 35 when entered, and the latter gives a penalty of 35. Any
other state transitions cost 1. The horizon is 35. To make the problem finite, we simply set the
6
δ ∈ {.99, .999, 1.0 (expectimin)}:
    15:45 SHV - DFW 16:45;  18:25 DFW - MCO 21:50;  23:15 MCO - BQN 02:46
δ ∈ {.3, .4, .5, .6, .7, .8, .9}:
    10:46 SHV - ATL 13:31;  14:10 ATL - EWR 16:30;  18:00 EWR - BQN 23:00
δ = 0.2:
    12:35 SHV - DFW 13:30;  18:25 DFW - MCO 21:50;  23:15 MCO - BQN 02:46
δ ∈ {0 (minimax), .001, .01, .1}:
    12:35 SHV - DFW 13:30;  14:25 DFW - MSY 15:50;  17:50 MSY - JFK 21:46;  23:40 JFK - BQN 04:20

(a) Paths under Chernoff policies assuming all flights arrive on time, shown using International Air Transport Association (IATA) airport codes.

(b) Cumulative distribution functions of rewards (equal to minus cost) under Chernoff policies at different risk levels. [Axes: P(V < v) versus total reward v (seconds); one curve per group of risk levels δ.] The asterisk (*) indicates the value of the policy. The big O indicates the expected reward and the small o's correspond to expectation plus/minus standard deviation. 10000 samples.

Figure 3: Chernoff policies to travel from Shreveport Regional Airport (SHV) to Rafael Hernández Airport (BQN) at different risk levels.
probability of all transitions outside the grid boundary to zero, and re-normalize. We set the precision to ε = 1. With this setting, our algorithm performed exponential utility optimization for 97 different parameters when planning for 14 values of the risk level δ. For low values of δ, the algorithm behaves cautiously, preferring longer, but safer routes. For higher values of δ, the algorithm is willing to take shorter routes, but also accepts increasing amounts of risk.
5.2 Air travel planning
The aerial travel planning MDP (Figure 3) illustrates that our method applies to real-world problems
at a large scale. It models the problem of buying airplane tickets to travel between two cities, when
you care only about reaching the destination in a reliable amount of time. We assume that, if you
miss a connecting flight due to delays, the airline will re-issue a ticket for the route of your choice
leading to the original destination. In this case, a cautious traveler will consider a number of aspects:
choosing flights that usually arrive on time, choosing longer connection times and making sure that,
in case of a missed connection, there are good alternative routes.
In our implementation, the state space consists of pairs of all airports and times when flights depart
from those airports. At every state there are two actions: either take the flight that departs at that time,
or wait. The total number of state-action pairs is 124791. To keep the horizon low, we introduce
enough wait transitions so that it takes no more than 10 transitions to wait a whole day in the busiest
airport (about 1000 flights per day) and we set the horizon at 100. Costs are deterministic and correspond to the time difference between the scheduled departure time of the first flight and the arrival
time. We compute transition probabilities based on historical data, available from the Office of Airline Information, Bureau of Transportation Statistics, at http://www.transtats.bts.gov/.
Particularly, we have used on-time statistics for February 2011. Airlines often try to conceal statistics
for flights with low on-time performance by slightly changing departure times and flight numbers.
Sometimes, they do this every week. Consequently, we first clustered together all flights with the
same origin and destination that were scheduled to depart within 15 minutes of each other, under the
assumption they would have the same on-time statistics. We, then, remove all clusters with fewer
than 7 recorded flights, since these usually correspond to incidental flights.
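The clustering step can be sketched as follows; the record format and the chaining rule used to implement "within 15 minutes of each other" are our own simplifying assumptions, not the authors' pipeline.

from collections import defaultdict

def cluster_flights(records, window=15, min_count=7):
    """records: (origin, dest, scheduled_departure_in_minutes) tuples for the month."""
    by_route = defaultdict(list)
    for orig, dest, dep in records:
        by_route[(orig, dest)].append(dep)
    clusters = []
    for route, deps in by_route.items():
        deps.sort()
        cur = [deps[0]]
        for d in deps[1:]:
            if d - cur[-1] <= window:             # chain departures within 15 minutes
                cur.append(d)
            else:
                clusters.append((route, cur))
                cur = [d]
        clusters.append((route, cur))
    return [(r, c) for r, c in clusters if len(c) >= min_count]  # drop incidental flights

demo = [("SHV", "DFW", 755), ("SHV", "DFW", 760), ("SHV", "DFW", 770)]
print(cluster_flights(demo, min_count=2))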
Figure 4: Histograms demonstrating the efficiency and relevance of our algorithm on 500 randomly chosen origin-destination airport pairs, at 15 risk levels. (a) Number of exponential utility optimization runs needed to compute the Chernoff policies. (b) Number of distinct Chernoff policies found.
To test our algorithm on this problem, we randomly chose 500 origin-destination airport pairs and computed the Chernoff policies for risk levels δ ∈ {1.0, .999, .99, .9, .8, . . . , .1, 0.01, 0.001, 0.0}, and precision ε = 10 minutes. Figure 3 shows the resulting policies and corresponding cost (travel time) histograms for one such randomly chosen route. To address the question of computational efficiency, Figure 4a shows a histogram of the total number of different parameters for which our algorithm ran exponential utility optimization. To address the question of relevance, Figure 4b shows the number of distinct Chernoff policies found among the risk levels. Two policies, π and π′, are considered distinct if the total variation distance of the induced state-action occupation measures is more than 10^-6; that is, if there exist t, s, and a such that |P_π{S_t = s, A_t = a} − P_{π′}{S_t = s, A_t = a}| ≥ 10^-6. For most origin-destination pairs we found a rich spectrum of distinct
policies, but there are also cases where all the Chernoff policies are identical or only the expectimax
and minimax policies differ.
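The distinctness test can be computed exactly by a forward pass over the occupation measures. A minimal sketch under assumed data structures (P[s][a] is a dict of successor probabilities, pi[t][s] a deterministic time-dependent policy, mu0 the initial state distribution):

def occupation_measures(P, pi, mu0, horizon):
    """Exact P_pi{S_t = s, A_t = a} for t = 0, ..., horizon-1 by forward recursion."""
    mus, mu = [], dict(mu0)
    for t in range(horizon):
        mu_sa = {(s, pi[t][s]): p for s, p in mu.items() if p > 0}
        mus.append(mu_sa)
        nxt = {}
        for (s, a), p in mu_sa.items():
            for s2, q in P[s][a].items():
                nxt[s2] = nxt.get(s2, 0.0) + p * q
        mu = nxt
    return mus

def distinct(mus1, mus2, tol=1e-6):
    """True if some occupation probability differs by at least tol."""
    for m1, m2 in zip(mus1, mus2):
        if any(abs(m1.get(k, 0.0) - m2.get(k, 0.0)) >= tol for k in set(m1) | set(m2)):
            return True
    return False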
Many air travel routes exhibit only two phases mainly because they connect small airports where
only one or two flights of the type we consider land or take off per day. Consequently there will be
few policies to choose from in these cases. In our experiment, we chose 200 origin and destination
pairs at random and, of these, 72 routes show only two phases. In 41 of these cases, either the
origin or the destination airport serves only one or two flights per day total. Only 9 of the two-phase
routes connect airports which both serve more than 10 flights per day total, and, of course, not all of
these flights will help reach the destination. Thus, typically the reason we see only two phases is that
the choice of policies is very limited. Additionally, airlines have an incentive to provide sufficient
margin such that passengers can make connections and they don't have to re-ticket them. That is,
they tend to set up routes such that, even in a worse than average scenario, the original route will
tend to succeed.
6 Conclusion
We proposed a new optimization objective for risk-aware planning called the Chernoff functional.
Our objective has a free parameter δ that can be used to interpolate between the nominal policy, which ignores risk, at δ = 1, and the minmax policy, which corresponds to extreme risk aversion, at δ = 0. The value of the Chernoff functional is, with probability at least 1 − δ, an upper bound on
the cost incurred by following the corresponding Chernoff policy. We established a close connection between optimizing the Chernoff functional and mean minus variance optimization, which has
been proposed before for risk-aware planning, but was found to be intractable in general. We also
establish a close connection with optimization of mean minus scaled standard deviation.
We proposed an efficient algorithm that optimizes the Chernoff functional to any desired accuracy ε, requiring O(1/ε) runs of exponential utility optimization. Our experiments illustrate the capability of our approach to recover a spread of policies in the spectrum from risk-neutral to minmax, requiring a running time that was on average about ten times the running time of value iteration.
References
[1] G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati, A. Lakshman, A. Pilchin, S. Sivasubramanian, P. Vosshall, and W. Vogels. Dynamo: Amazon's highly available key-value store. ACM SIGOPS Operating Systems Review, 41(6):205–220, 2007.
[2] Philippe Jorion. Value at Risk: The New Benchmark for Managing Financial Risk, volume 1. McGraw-Hill Professional, 2007.
[3] Shie Mannor and John N. Tsitsiklis. Mean-variance optimization in Markov decision processes. In Proceedings of the 28th International Conference on Machine Learning, 2011.
[4] Erick Delage and Shie Mannor. Percentile optimization in uncertain Markov decision processes with application to efficient exploration. ICML, vol. 227, page 225, 2007.
[5] Jay K. Satia and Roy E. Lave Jr. Markovian decision processes with uncertain transition probabilities. Operations Research, 21(3):728–740, 1973.
[6] Matthias Heger. Consideration of risk in reinforcement learning. In Proceedings of the 11th International Machine Learning Conference, pages 105–111. Morgan Kaufmann, 1994.
[7] Steve Levitt and Adi Ben-Israel. On modeling risk in Markov decision processes. Optimization and Related Topics, pages 27–41, 2001.
[8] Erick Delage and Shie Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, 58(1):203–213, 2010.
[9] Arnab Nilim and Laurent El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780–798, 2005.
[10] M. Bouakiz and Y. Kebir. Target-level criterion in Markov decision processes. Journal of Optimization Theory and Applications, 86(1):1–15, July 1995.
[11] Congbin Wu and Yuanlie Lin. Minimizing risk models in Markov decision processes with policies depending on target values. Journal of Mathematical Analysis and Applications, 231(1):47–67, 1999.
[12] S. I. Marcus, E. Fernández-Gaucherand, D. Hernández-Hernández, S. Coraluppi, and P. Fard. Risk sensitive Markov decision processes. Systems and Control in the Twenty-First Century, 29:263–281, 1997.
[13] V. S. Borkar and S. P. Meyn. Risk-sensitive optimal control for Markov decision processes with monotone cost. Mathematics of Operations Research, 27(1):192–209, 2002.
[14] B. Defourny, D. Ernst, and L. Wehenkel. Risk-aware decision making and dynamic programming. In NIPS 2008 Workshop on Model Uncertainty and Risk in RL, 2008.
[15] Yann Le Tallec. Robust, Risk-Sensitive, and Data-Driven Control of Markov Decision Processes. PhD thesis, Massachusetts Institute of Technology, 2007.
[16] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[17] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, October 1996.
[18] J. F. Kenney and E. S. Keeping. Cumulants and the cumulant-generating function, additive property of cumulants, and Sheppard's correction. In Mathematics of Statistics, chapter 4.10–4.12, pages 77–82. Van Nostrand, Princeton, NJ, 2nd edition, 1951.
[19] Richard Durrett. Probability: Theory and Examples. Cambridge University Press, 2010.
Strategic Impatience in Go/NoGo versus Forced-Choice Decision-Making
Angela J. Yu
Cognitive Science Department
University of California, San Diego
La Jolla, CA, 92093
[email protected]
Pradeep Shenoy
Cognitive Science Department
University of California, San Diego
La Jolla, CA, 92093
[email protected]
Abstract
Two-alternative forced choice (2AFC) and Go/NoGo (GNG) tasks are behavioral
choice paradigms commonly used to study sensory and cognitive processing in
choice behavior. While GNG is thought to isolate the sensory/decisional component by eliminating the need for response selection as in 2AFC, a consistent tendency for subjects to make more Go responses (both higher hits and false alarm
rates) in the GNG task raises the concern that there may be fundamental differences in the sensory or cognitive processes engaged in the two tasks. Existing
mechanistic models of these choice tasks, mostly variants of the drift-diffusion
model (DDM; [1, 2]) and the related leaky competing accumulator models [3, 4],
capture various aspects of behavioral performance, but do not clarify the provenance of the Go bias in GNG. We postulate that this "impatience" to go is a strategic adjustment in response to the implicit asymmetry in the cost structure of the
2AFC and GNG tasks: the NoGo response requires waiting until the response
deadline, while a Go response immediately terminates the current trial. We show
that a Bayes-risk minimizing decision policy that minimizes not only error rate
but also average decision delay naturally exhibits the experimentally observed Go
bias. The optimal decision policy is formally equivalent to a DDM with a time-varying threshold that initially rises after stimulus onset, and collapses again just before the response deadline. The initial rise in the threshold is due to the diminishing temporal advantage of choosing the fast Go response compared to the fixed-delay NoGo response. We also show that fitting a simpler, fixed-threshold DDM
to the optimal model reproduces the counterintuitive result of a higher threshold in
GNG than 2AFC decision-making, previously observed in direct DDM fit to behavioral data [2], although such fixed-threshold approximations cannot reproduce
the Go bias. Our results suggest that observed discrepancies between GNG and
2AFC decision-making may arise from rational strategic adjustments to the cost
structure, and thus need not imply any other difference in the underlying sensory
and cognitive processes.
1 Introduction
The two-alternative forced-choice (2AFC) task is a standard experimental paradigm used in psychology and neuroscience to investigate various aspects of sensory, motor, and cognitive processing
[5]. Typically, the paradigm involves a forced choice between two responses based on a presented
stimulus, with the measured response time and accuracy of choices shedding light on the cognitive
and neural processes underlying behavior. Another paradigm that appears to share many features
of the 2AFC task is the Go/NoGo (GNG) task [6], (see Luce [5] for a review), where one stimulus
category is associated with an overt Go response that has to be executed before a response deadline, and the other stimulus (NoGo) requires withholding response until the response deadline has
elapsed. In principle, the GNG task could be used to probe the same decision-making problems as
the 2AFC task, with the possible advantage of eliminating a ?response selection stage? that may
follow the decision in the 2AFC task [6, 7]. Indeed, the GNG task has been used to study various
aspects of human and animal cognition, e.g., lexical judgements [8, 9], perceptual decision-making
[10, 11, 12], and the neural basis of choice behavior (in particular, distinguishing among neural
activations associated with stimulus, memory, and response) [13, 14, 15]. However, experimental
evidence also indicates that there is a curious choice bias toward the overt (Go) response in the GNG
task [11, 16, 2, 15], in the form of shorter response times and more false alarms for the Go response,
than when compared to the same stimulus pairings in a 2AFC task [2, 16]. It has been suggested that
this choice bias may reflect differential sensory and cognitive processes underlying the two tasks, thus making the two non-interchangeable in the study of perception and decision-making.
In this paper, we hypothesize that this discrepancy may simply be due to differences in the implicit
reward (cost) structure of the two tasks: the NoGo response incurs a higher imposed waiting cost
than the Go response, since the NoGo response must wait until the response deadline has passed to
register, while a Go response immediately terminates the trial. In contrast, in the 2AFC task, the cost
function is symmetric for the two alternatives, whether in terms of error or delay. We propose that
the implicit cost structure difference in GNG can fully account for the Go bias in GNG compared
to 2AFC tasks, without the need to appeal to other differences in sensory or cognitive processing.
To investigate this hypothesis, we adopt a Bayes risk minimization framework for both the 2AFC
and GNG tasks, whereby sensory processing is modeled as iterative Bayesian inference of stimulus
type based on a stream of noisy sensory input, and the decision of when/how to respond rests on
a policy that minimizes a linear combination of expected decision delay and response errors. The
optimal decision policy for this Bayes-risk formulation in the 2AFC task is known as the sequential
probability ratio test (SPRT; [17, 18]), and has been shown to account for both behavioral [19, 4]
and neural data [19, 20]. Here, we generalize this theoretical framework to account for both 2AFC
and GNG decision-making in a unified framework, by assuming that a subject?s sensory and perceptual processing (of the same pair of stimuli) and the relative preference for decision accuracy versus
speed are shared across 2AFC and GNG, with the only difference between them being the asymmetric temporal cost implicit in the reward structure of the GNG task ?the Go response terminating
a trial while the NoGO response only registering after the response deadline.
As a stochastic process, SPRT is a bounded random walk, whereby the stochasticity in the random
walk comes from noise in the observation process. The continuum (time) limit of a bounded random
walk is the bounded drift-diffusion model (DDM), which generally assumes a stochastic dynamic
variable to undergo constant drift, as well as diffusion due to Wiener noise, until one of two finite
thresholds is breached. In psychology, DDM has been augmented with additional parameters such as
a non-decision-related response delay, variability in drift rate, and variability in starting point across
trials. Figure 4A shows a simple variant of the DDM illustrating the following parameters: rate
of accumulation, threshold, and "nondecision time" or temporal offset to the start of the diffusion
process. These augmented DDMs have been used to model behavior in 2AFC tasks [21, 22, 23,
5, 24, 4], and also appear to provide good descriptive accounts of the neural activities underlying
perceptual decision-making [25, 20, 26, 27]. Variants of augmented DDM have also been utilized to
fit data in other simple decision-making tasks, including the GNG task [2]. While augmenting DDM
with extra parameters gives it additional power in explaining subtleties in data, this also diminishes
the normative interpretability of DDM fits by eliminating its formal relationship to the optimal SPRT
procedure. As a consequence, when the behavioral objectives change, e.g., in the GNG task, DDM
cannot predict a priori what parameters ought to change and how much. Instead, we begin with a
Bayes-risk minimization formulation and derive the non-parametric optimal decision-procedure as
a function of sensory statistics and behavioral objectives. We then map the optimal policy to the
DDM model space, and compare directly with previously proposed DDM variants in the context of
2AFC and GNG tasks.
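For readers unfamiliar with the DDM, the following is a minimal simulation of the simplified three-parameter variant of Figure 4A (rate, threshold, and nondecision offset); the parameter values are illustrative, and the trial-to-trial variability terms of the augmented DDM are omitted.

import numpy as np

def ddm_trial(rate, threshold, t_nd, dt=0.001, sigma=1.0, rng=np.random):
    """One diffusion-to-bound trial; returns (response, reaction time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += rate * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t + t_nd          # 1 = upper bound, 0 = lower bound

rts = [ddm_trial(rate=1.0, threshold=1.0, t_nd=0.3)[1] for _ in range(1000)]
print("mean RT:", sum(rts) / len(rts))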
In the following sections, we first describe our proposed Bayesian inference and decision-making
model, then compare simulations of the optimal decision-making model with published experimental data of subjects performing perceptual decision-making in 2AFC and GNG tasks [16]. We also
explore other evidence exploring the degree of go bias in the GNG task [28]. Next, we consider
the formal relationship between the optimal model and a fixed-threshold DDM that was previously
utilized to fit behavioral data from the GNG task [2, 12]. Finally, we present novel experimental
predictions of the optimal decision-making model, including those that specifically differ from the fixed-threshold DDM approximation [2, 12].

Figure 1: Systematic error biases in the GNG task. (A) The figure shows error rates associated with a perceptual decision-making task performed by subjects in both Go/NoGo and Yes/No (forced choice) settings. Although the error rates in the forced choice settings were similar for both classes, there was a significant bias towards the Go response in the GNG task, with more false alarms than omission errors. (B) Mean response time on the GNG task was lower than for the same stimulus on the 2AFC task. (Data adapted from Bacon-Macé et al., 2007.)
2 Bayesian inference and risk minimization in choice tasks
Human choice behavior in the GNG and 2AFC tasks exhibits a consistent Go bias in the GNG task
that is not apparent for the same stimulus in the 2AFC task. For example, Figure 1 shows data
from a task in which subjects must identify whether a briefly-presented noisy image contains an
animal or not [16], under two different response conditions: GNG (only respond to animal-present
images), and 2AFC (respond yes/no to each image). Subjects showed a significant bias towards the
Go response in the GNG task, in the form of higher false alarms than omission errors (Figure 1A),
as well as faster RT than for the same stimulus in the 2AFC task (Figure 1B).
For the 2AFC task, a large body of literature supports the "accumulate-to-bound" model of perceptual decision-making [23, 20, 26], where moment-to-moment sensory input ("evidence" in favor of
either choice) is accumulated over time until it reaches a bound, at which point, a response is generated. Previous work by Yu & Frazier [29] extended the formulation to include 2AFC tasks with a
decision deadline, in which subjects have the additional constraint of not exceeding a decision deadline. They showed that the optimal policy for decision-making under a deadline is to accumulate
evidence up to time-varying thresholds that collapse toward each other over time, leading to more
"liberal" choices and higher error rate in later responses than earlier ones. Here, we generalize the
framework to model the GNG task. In particular, the same deadline by which the subject must make a response (or else be counted as a "miss") on a Go trial is the one before which the subject must withhold response (or else be counted as a "false alarm"). We model evidence accumulation as iterative
Bayesian inference over the identity of the stimulus, and decision-making as an iterative decision
policy that chooses whether to respond (and which one in 2AFC) or continue observing at least one
more time point, based on current evidence. The optimal policy minimizes the expected value of a
cost function that depends linearly on decision delay and errors. The model is described below.
2.1 Evidence integration as Bayesian inference
We model evidence accumulation, in both 2AFC and GNG, as iterative Bayesian inference about the
stimulus identity conditioned on an independent and identically distributed (i.i.d.) stream of sensory
input. Specifically, we assume a generative model where the observations are a continual sequence
of data samples x1 , x2 , . . ., iid-generated from a likelihood function f0 (x) or f1 (x) depending on
whether the true stimulus state is d = 0 or d = 1, respectively. This incoming stream therefore
provides accumulating evidence of the hidden category label d ∈ {0, 1}. For concreteness, we assume the likelihood functions are Gaussian distributions with means ±μ (+ for d = 1, − for d = 0), and a variance parameter σ² controlling the noisiness of the stimuli.
Figure 2: Rational behavior in 2AFC and GNG tasks. (A) The figure shows the decision threshold as a function of belief state across the 2AFC and GNG tasks. The optimal decision boundary for 2AFC is a pair of parallel thresholds (solid line) that collapse and meet at the response deadline (indicated by the dashed vertical line). The optimal GNG decision boundary is a single, initially increasing threshold (dashed line) that decreases to 0.5 at the response deadline. (B, C) Monte Carlo simulations of the optimal policy show a bias towards the overt response in the GNG task. The two response alternatives in the 2AFC task are represented as "left" and "right", corresponding to "nogo" and "go" in the GNG task (B). The GNG task shows lower miss rate and higher false alarm rate than the corresponding 2AFC error rates (B), along with faster RT than the 2AFC task (C). Compare to the experimental data in Figure 1. Parameter settings: c = 0.01, σ = 0.25, D = 40 timesteps.
The recognition model specifies the mechanism by which stimulus identity is inferred from the
noisy observations x_t. In our model, we compute a posterior distribution over the category label conditioned on the data sampled so far, x^t ≡ (x_1, x_2, . . . , x_t); this posterior, b_t ≡ P{d = 1 | x^t}, is also known as the belief state, and is computed by iteratively applying Bayes' rule:

b_{t+1} = b_t f_1(x_{t+1}) / [ b_t f_1(x_{t+1}) + (1 − b_t) f_0(x_{t+1}) ]    (1)

where b_0 ≡ P{d = 1} is the prior probability of the stimulus category being 1 (and is 0.5 for
equally likely stimuli). We hypothesize that the same evidence accumulation mechanism underlies
decision-making in both tasks, in particular with the same noise process/likelihood functions, f0 (x)
and f1 (x), for a particular individual observing the same stimuli.
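Eq. (1) can be transcribed directly. Below is a small sketch with illustrative Gaussian parameters; note that the normalization constants of f_0 and f_1 cancel in the ratio, so unnormalized densities suffice.

import numpy as np

mu, sigma = 0.5, 1.0                              # illustrative values
gauss = lambda x, m: np.exp(-(x - m) ** 2 / (2 * sigma ** 2))  # constants cancel

def update(b, x):                                 # one application of Eq. (1)
    n = b * gauss(x, +mu)
    return n / (n + (1 - b) * gauss(x, -mu))

b = 0.5                                           # prior P{d = 1}
rng = np.random.default_rng(0)
for x in rng.normal(+mu, sigma, size=20):         # samples from a d = 1 stimulus
    b = update(b, x)
print("posterior P(d = 1 | x_1..x_20):", round(float(b), 3))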
2.2 Action selection as Bayes-risk minimization
We model behavior in the two tasks as a sequential decision-making process where, at each instant,
the model decides between two actions, as a function of the current evidence so far, encapsulated
in the current belief state bt : stop (and choose the response for the more probable stimulus category
for 2AFC), or continue one more time step. A stopping policy is a mapping from the belief state to the action space, π : b_t ↦ {stop, continue}, where the stop action in 2AFC also requires a stimulus category decision δ. In accordance with the standard Bayes risk framework for optimizing
the decision policy in a stopping problem, we assume that the behavioral cost function is a linear
combination of the probability of making a decision error and the expected decision delay τ (the
stopping time if a response is emitted before the deadline, and the deadline D otherwise). We
assume that the decision delay component is weighted by a sampling or time cost c, while the cost
of all decision errors are penalized by the same magnitude and normalized to unit cost. Based on
this cost function, the optimal decision policy is the policy that minimizes the overall expected cost:
2AFC: L_π = c⟨τ⟩ + P{δ ≠ d} + P{τ = D}    (2)
GNG:  L_π = c⟨τ⟩ + P{τ = D | d = 1} P{d = 1} + P{τ < D | d = 0} P{d = 0}    (3)
The 2AFC cost function is a special case of the more general scenario previously considered for deadlined sequential hypothesis testing [29]: P{δ ≠ d} is the expected wrong-response cost, while P{τ = D} is the expected cost of not responding before the deadline (omission error). In the GNG cost function, P{τ = D | d = 1} is the probability that no response is emitted before the deadline on a Go trial (miss), P{τ < D | d = 0} is the probability that a NoGo trial is terminated by a Go
Figure 3: Influence of stimulus statistics on Go bias. Our model predicts that false alarms are more frequent than misses (A), and are also faster than correct Go RTs (B). The Go bias, which is apparent at 50% Go trials, is significantly increased when Go trials are more frequent (80%), and reduced when Go trials make up only 20% of the trials. Parameter settings: c = 0.014, σ = 0.45, D = 40 timesteps. (C, D) Human subjects exhibited a similar pattern of behavior in a letter discrimination task (data from Nieuwenhuis et al., 2003).
response (false alarm), a correct hit requires τ < D (responding before the deadline), and a correct
NoGo response consists of a series of continue actions until a predefined response deadline D. In
both GNG and 2AFC tasks, the choice to stop limits the decision delay cost, and the choice to
continue (up to a predefined response deadline D) results in the collection of more data that help
to disambiguate the stimulus category but at the cost of c per additional sample of data observed.
We compute the optimal policy using Bellman's dynamic programming principle (Bellman, 1952).
Specifically, we iteratively compute the expected cost of continue and stop as a function of the belief state b_t (these are the Q-factors for continue and stop, Q_c(b_t) and Q_s(b_t)). If Q_c(b_t) < Q_s(b_t), then
the optimal policy chooses to continue; otherwise, it chooses to stop; therefore, the belief state is
partitioned by the decision policy into a continuation region and a stopping region (details omitted
due to lack of space).
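A hedged sketch of this backward recursion for the GNG task: the belief state is discretized, the expectation over the next observation is approximated by sampling from the predictive mixture, and the terminal value is V_D(b) = b (the miss probability if no Go response has been emitted by the deadline). Grid resolution and sample counts are arbitrary choices, not the authors' implementation.

import numpy as np

c, mu, sigma, D = 0.01, 0.25, 1.0, 40             # step cost, likelihoods, deadline
B = np.linspace(1e-4, 1 - 1e-4, 201)              # belief grid
gauss = lambda x, m: np.exp(-(x - m) ** 2 / (2 * sigma ** 2))

def update(b, x):                                 # Bayes update, Eq. (1)
    n = b * gauss(x, +mu)
    return n / (n + (1 - b) * gauss(x, -mu))

rng = np.random.default_rng(0)
V = B.copy()                                      # V_D(b) = b
go_threshold = []
for t in range(D - 1, -1, -1):
    Q_stop = 1.0 - B                              # emit Go now: false-alarm risk
    Q_cont = np.empty_like(B)
    for i, b in enumerate(B):
        d = rng.random(500) < b                   # sample stimulus identity, then x
        x = rng.normal(np.where(d, mu, -mu), sigma)
        Q_cont[i] = c + np.interp(update(b, x), B, V).mean()
    V = np.minimum(Q_stop, Q_cont)
    stop = Q_stop <= Q_cont                       # the Go region is the high-belief side
    go_threshold.append(B[np.argmax(stop)] if stop.any() else 1.0)
print("Go threshold at t = 0:", round(float(go_threshold[-1]), 3))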
The principal difference between the two tasks as formulated here is the loss function. In the 2AFC
task, all trials are terminated by a response (unless the response deadline is exceeded). However, in
the GNG version, subjects have to wait until the response deadline to choose the NoGo response.
This introduces a significant, extra cost of time for NoGo responses, suggesting that it may in some
cases be better to select the Go response despite the relative inadequacy of sensory evidence. We
explore these aspects in detail in the following section.
3 Results
Opportunity cost and the Go/NoGo decision threshold
Figure 2A illustrates the difference between the optimal decision policies for the two tasks. The red
lines (solid: 2AFC, dashed: GNG) illustrate the optimal decision thresholds, which, when exceeded
by the cumulative sensory evidence bt , generate the corresponding response, as a function of time.
For the 2AFC task, the optimal policy is a pair of thresholds that are initially fairly constant over
time, but then collapse toward each other (into an empty set if the cost of exceeding the deadline
is sufficiently large) as the deadline approaches (cf. [29]). In contrast, the threshold for the GNG
Figure 4: Drift-diffusion model (DDM) for 2AFC and GNG tasks. (A) A simplified version of the
DDM for 2-choice tasks, where a noisy accumulation process with a certain rate produces one of
two responses when it reaches a positive or negative threshold. In addition to the rate and threshold
parameters, a third parameter (the temporal offset to the start of the accumulation process) represents
the nondecision processes associated with visual and motor delays. (B) DDM fits to 2AFC and GNG
choice data (Gomez et al., 2007; Mack & Palmeri, 2010) suggest that the GNG task is associated
with a higher threshold and shorter offset than the 2AFC task. (C) Optimal decision-making model
predicts a lower, time-varying threshold for the GNG task.
task (dotted line) is a single threshold that varies over time, and is lower at the beginning of the
trial. This is a direct consequence of the opportunity cost involved with waiting until the deadline:
if the deadline is far away, the cost of waiting may be more than the cost of an immediate error that
terminates the trial; indeed, we expect that the farther away the deadline, the greater the temporal cost savings conferred by a Go response over waiting to register the NoGo response.
Decision-making in 2AFC and GNG tasks
Figure 2B,C shows the effect of the time-varying threshold on RT and accuracy in an example model
simulation. Figure 2B shows that the GNG model is significantly biased towards the Go response,
with a higher fraction of false alarms than misses. This asymmetry is absent in the 2AFC model
performance. In addition, GNG response times are faster than 2AFC response times (Figure 2C).
This bias is a direct result of the time-varying threshold in the GNG task; early on in the trial, the
decision threshold is lower, and produces fast, error-prone responses.
This model prediction is consistent with data from human perceptual decision-making. Figure 1
shows behavioral data in the two tasks [16]: subjects determined from a brief presentation of a
noisy visual stimulus whether or not the image contained an animal. The same task was performed
in two response conditions: 2AFC, where each stimulus required a yes/no response, and GNG,
where subjects only responded to image containing the target. Figure 1A shows that in the 2AFC
condition, subjects are not significantly biased towards either response, with both false alarms and
miss rates being similar to each other. On the other hand, in the Go/NoGo condition, subjects
showed a significant bias towards the overt response, thus producing substantially more false alarms
and fewer misses. In the GNG task, their RT was significantly shorter than in the 2AFC task (Figure
1B). Similar results have also been reported by Gomez et al. in the context of lexical decisionmaking [2].
Influence of stimulus probability on Go bias
We investigate the degree of Go bias in the GNG model by considering the effect of trial type frequency on behavioral measures in the GNG task. Model simulations (Figure 3) show that, consistent
with Figure 2 and a host of other experimental data, there is a significant bias toward the Go response
when Go and NoGo trials are equiprobable, and this bias is increased (respectively diminished) as
NoGo trials are fewer or more frequent. The figure also shows that RT for both correct Go and erroneous NoGo responses increase with the frequency of NoGo trials, and that false alarm RT is faster
than correct response RT. In recent work, Nieuwenhuis et al. [28] used a block design to compare
choice accuracy and RT in a letter discrimination task when the fraction of NoGo trials was set to
20%, 50%, and 80%. As shown in Figure 3C,D, subjects' behavior was reliably modulated by trial
type frequency, in a manner closely reflecting model predictions.
Figure 5: DDM approximation to optimal decision-making model. Simplified DDMs were fit to optimal model simulations of 2AFC and GNG behavior, and the best-fit parameters compared between
tasks. The DDM approximation for optimal GNG behavior shows a higher decision threshold (B),
and lower nondecision time (C), than the DDM approximation for the 2AFC task. In addition, the
rate of evidence accumulation was also lower for the GNG fit (A).
In our formulation, although the decision boundary is unchanged by the experimental manipulation,
the stimulus frequency induces a prior belief over the identity of the stimulus, and thus represents
the starting point for the evidence accumulation process. When Go trials are rare, the starting point
is far from the decision boundary, and it takes longer for a response to be generated. Further, due to
the extra evidence needed to overcome the prior, choices are less likely to be erroneous.
Drift-diffusion models and optimal behavior
Various versions of augmented DDM have been used to fit GNG behavioral data, with one variant
in particular suggesting that the decision threshold in GNG ought to be higher than 2AFC [2], in
an apparent contradiction to our model's predictions (Figure 4). By fitting RT and choice data from
lexical judgment, numerosity judgment, and memory-based decision making tasks, Gomez et al. [2]
found that a DDM with an implicit negative boundary associated with the NoGo stimulus provided
a good fit to RT data. Further, joint parameter fits to 2AFC and GNG choice data indicated that the
principal difference in the two tasks was in the nondecision time and decision threshold; the rate
parameter (representing the evidence accumulation process) was similar in both tasks. In particular,
they suggested that the nondecision time was shorter, and the decision threshold higher than in
the 2AFC task (Figure 4B). These results were replicated by Mack & Palmeri by fitting DDM to
behavioral data from a visual categorization task performed in both 2AFC and GNG versions [12].
Although DDMs are formally equivalent to optimal decision-making in a restricted class of sequential choice problems [18], they do not explicitly represent and manipulate uncertainty and cost, as we
do in our Bayesian risk-minimization framework. In particular, our framework allows us to predict
that optimal behavior is well-characterized by a DDM with a time-varying threshold (Figure 4C),
and that the restricted class of constant-threshold DDMs are insufficient to fully explain observed
behavior. Nevertheless, we can ask whether our prediction is consistent with the empirical results
obtained from DDM fits with constant decision thresholds.
To address this, we computed the best constant-threshold DDM approximations to optimal decision
making in the two tasks. We simulated the optimal model with a shared set of parameters for both
the 2AFC and GNG tasks, and fit simplified random-walk models with 3 free parameters (Figure
4A) to the output of our optimal model's simulations. Figure 5 shows that the best-fitting DDM
approximation for optimal GNG behavior has a higher threshold and a lower offset parameter than
the best-fitting DDM for optimal 2AFC task behavior.
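A sketch of one way such a fit could be set up; the quantile-matching loss and the Nelder-Mead optimizer are our illustrative choices, not the procedure used here or in [2]. As a smoke test, it fits one simulated DDM to data generated by another.

import numpy as np
from scipy.optimize import minimize

def ddm_sim(params, n=500, dt=0.005, seed=0):
    """Simulate n trials of the three-parameter DDM of Figure 4A."""
    rate, thresh, offset = params
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < thresh and t < 5.0:        # cap keeps the loop finite
            x += rate * dt + np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + offset)
        choices.append(x >= thresh)
    return np.asarray(rts), np.asarray(choices)

def loss(params, target_rts, target_acc):
    if min(params[:2]) <= 0 or params[2] < 0:
        return 1e6
    rts, ch = ddm_sim(params)                     # fixed seed: deterministic loss
    q = [.1, .3, .5, .7, .9]
    return float(np.sum((np.quantile(rts, q) - np.quantile(target_rts, q)) ** 2)
                 + (ch.mean() - target_acc) ** 2)

target_rts, target_ch = ddm_sim((1.2, 0.8, 0.3), seed=1)
res = minimize(loss, x0=(1.0, 1.0, 0.2), args=(target_rts, target_ch.mean()),
               method="Nelder-Mead")
print("recovered (rate, threshold, offset):", res.x)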
Note that varying the magnitude of a symmetric (explicit and implicit) decision threshold is not capable of explaining the Go bias towards the overt response. Gomez et al. also considered additional variants of the DDM which allow for a change in the initial starting point, and for a different accumulation rate in the GNG task. These models, when fit to data, showed a bias towards the overt response; however, the quality of fit did not significantly improve [2].
Thus, our results and those of Gomez et al. [2] are conceptually consistent; a prinicipal difference
in the two tasks is the decision threshold, whereas the evidence accumulation process is similar
across tasks. However, our analysis explains precisely how and why the thresholds in the two tasks
are different: the GNG task has a time-varying threshold that is lower than the 2-choice threshold,
due to the difference in loss functions in the two tasks. In particular, our model accounts for the
bias towards the overt response, without recourse to an implicit decision boundary or additional
parameter changes. When optimal behavior is approximated by a simpler class of models (e.g.,
models with fixed decision threshold), the best fit to optimal GNG behavior turns out to be a higher
threshold and shorter nondecision time, as found by previous work [2, 12], and adjustments to the
initial starting point are required to explain the overt response bias.
4 Discussion
Forcing a choice between two alternatives is a fundamental technique used to study a wide variety
of perceptual and cognitive phenomena, but there has long been confusion over whether GNG and
2AFC variants of such tasks are probing the same underlying neural and cognitive processes. Our
work demonstrates that a common Bayes-optimal sequential inference and decision policy can explain the behavioral results in both tasks, as well as what was perceived to be a troubling Go bias in
the GNG task, compared to 2AFC. We showed that the Go bias arises naturally as a rational response
to the asymmetric time cost between Go and NoGo responses, as the former immediately terminates
the trial, while the latter requires the subject to wait until the end of the trial to record the choice.
The consequence of this cost asymmetry is an optimal decision policy that requires Bayesian evidence accumulation up to a time-varying boundary, which has an inverted-U shape: the initial low
boundary is due to the temporal advantage of choosing to Go early and save on the time necessary
to wait to register a NoGo response, while the later collapse of the boundary is due to the expectation of the
deadline for responding. We showed that this optimal decision policy accounts for the general behavioral phenomena observed in GNG tasks, in particular accounting for the Go bias. Importantly,
our work shows that there need not be any fundamental differences in the cognitive and neural processes
underlying perception and decision-making in these tasks, at least not on account of the Go bias.
Our model makes several novel experimental predictions for the GNG task: (1) for fast responses,
false alarm rate increases as a function of response time (in contrast, the fixed-threshold DDM approximation predicts a constant alarm rate); (2) lengthening the response deadline should exacerbate
the Go bias; (3) if GNG and 2AFC share a common inference and decision-making neural infrastructure, then our model predicts within-subject cross-task correlation: e.g. favoring speed over
accuracy in the 2AFC task should correlate with a greater Go bias in the GNG task.
The optimal decision policy for the GNG task can naturally be viewed as a stochastic process (though
it is normatively derived from task statistics and behavioral goals). We can therefore compare our
model to other stochastic process models previously proposed for the GNG task. Our model has
a single decision threshold associated with the overt response, consistent with some early models
proposed for the task (see e.g., Sperling et al. [30]). In contrast, the extended DDM framework
proposed by Gomez et al. has an additional boundary associated with the NoGo response (corresponding to a covert NoGo response). Gomez et al. report that single-threshold variants of the DDM
provided very poor fits to the data. Although computationally and behaviorally we do not require
a covert-response or associated threshold, it is nevertheless possible that neural implementations
of behavior in the task may involve an explicit "NoGo" choice. For instance, substantial empirical work aims to isolate neural correlates of restraint, corresponding to a putative "NoGo" action, by contrasting neural activity on "go" and "nogo" trials (see e.g., [31, 32]). We will consider approximating
the optimal policy with one that includes this second boundary in future work.
References
[1] R. Ratcliff and P. L. Smith. Psychol. Rev., 111:333–346, 2004.
[2] P. Gomez, R. Ratcliff, and M. Perea. Journal of Experimental Psychology, 136(3):389–413, 2007.
[3] M. Usher and J. L. McClelland. Psychol. Rev., 108(3):550–592, 2001.
[4] R. Bogacz, E. Brown, J. Moehlis, P. Holmes, and J. D. Cohen. Psychological Review, 113(4):700, 2006.
[5] R. D. Luce. Number 8. Oxford University Press, USA, 1991.
[6] F. C. Donders. Acta Psychologica, 30:412, 1969.
[7] B. Gordon and A. Caramazza. Brain and Language, 15(1):143–160, 1982.
[8] Y. Hino and S. J. Lupker. Journal of Experimental Psychology: Human Perception and Performance, 26:166–183, 2000.
[9] M. Perea, E. Rosa, and C. Gomez. Memory and Cognition, 30(34–45), 2002.
[10] S. Thorpe, D. Fize, C. Marlot, and others. Nature, 381(6582):520–522, 1996.
[11] A. Delorme, G. Richard, and M. Fabre-Thorpe. Vision Research, 40(16):2187–2200, 2000.
[12] M. L. Mack and T. J. Palmeri. Journal of Vision, 10:1–11, 2010.
[13] M. A. Sommer and R. H. Wurtz. J. Neurophysiol., 85(4):1673–1685, 2001.
[14] R. P. Hasegawa, B. W. Peterson, and M. E. Goldberg. Neuron, 43(3):415–25, August 2004.
[15] G. Aston-Jones, J. Rajkowski, and P. Kubiak. J. Neurosci., 14:4467–4480, 1994.
[16] N. Bacon-Macé, H. Kirchner, M. Fabre-Thorpe, and S. J. Thorpe. J. Exp. Psychol.: Human Perception and Performance, 33(5):1013, 2007.
[17] A. Wald. Dover Publications, 1947.
[18] A. Wald and J. Wolfowitz. The Annals of Mathematical Statistics, 19(3):326–339, 1948.
[19] J. D. Roitman and M. N. Shadlen. J. Neurosci., 22(21):9475, 2002.
[20] J. I. Gold and M. N. Shadlen. Neuron, 36(2):299–308, 2002.
[21] M. Stone. Psychometrika, 25(3):251–260, 1960.
[22] D. R. J. Laming. Academic Press, 1968.
[23] R. Ratcliff. Psychological Review, 85(2):59, 1978.
[24] J. I. Gold and M. N. Shadlen. Annu. Rev. Neurosci., 30:535–574, 2007.
[25] D. P. Hanes and J. D. Schall. Science, 274(5286):427, 1996.
[26] M. E. Mazurek, J. D. Roitman, J. Ditterich, and M. N. Shadlen. Cerebral Cortex, 13(11):1257, 2003.
[27] R. Ratcliff, A. Cherian, and M. Segraves. Journal of Neurophysiology, 90:1392–1407, 2003.
[28] S. Nieuwenhuis, N. Yeung, W. van den Wildenberg, and K. R. Ridderinkhof. Cognitive, Affective & Behavioral Neuroscience, 3(1):17–26, March 2003.
[29] P. Frazier and A. J. Yu. Advances in Neural Information Processing Systems, 20:465–472, 2008.
[30] G. Sperling and B. Dosher. Handbook of Perception and Human Performance, 1:2–1, 1986.
[31] D. J. Simmonds, J. J. Pekar, and S. H. Mostofsky. Neuropsychologia, 46(1):224–232, 2008.
[32] A. R. Aron, S. Durston, D. M. Eagle, G. D. Logan, C. M. Stinear, and V. Stuphorn. The Journal of Neuroscience, 27(44):11860–11864, 2007.
3,884 | 4,516 | Discriminative Learning of Sum-Product Networks
Robert Gens
Pedro Domingos
Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195-2350, U.S.A.
{rcg,pedrod}@cs.washington.edu
Abstract
Sum-product networks are a new deep architecture that can perform fast, exact inference on high-treewidth models. Only generative methods for training SPNs
have been proposed to date. In this paper, we present the first discriminative
training algorithms for SPNs, combining the high accuracy of the former with
the representational power and tractability of the latter. We show that the class
of tractable discriminative SPNs is broader than the class of tractable generative
ones, and propose an efficient backpropagation-style algorithm for computing the
gradient of the conditional log likelihood. Standard gradient descent suffers from
the diffusion problem, but networks with many layers can be learned reliably using "hard" gradient descent, where marginal inference is replaced by MPE inference (i.e., inferring the most probable state of the non-evidence variables). The
resulting updates have a simple and intuitive form. We test discriminative SPNs
on standard image classification tasks. We obtain the best results to date on the
CIFAR-10 dataset, using fewer features than prior methods with an SPN architecture that learns local image structure discriminatively. We also report the highest
published test accuracy on STL-10 even though we only use the labeled portion
of the dataset.
1 Introduction
Probabilistic models play a crucial role in many scientific disciplines and real world applications.
Graphical models compactly represent the joint distribution of a set of variables as a product of factors normalized by the partition function. Unfortunately, inference in graphical models is generally
intractable. Low treewidth ensures tractability, but is a very restrictive condition, particularly since
the highest practical treewidth is usually 2 or 3 [2, 9]. Sum-product networks (SPNs) [23] overcome
this by exploiting context-specific independence [7] and determinism [8]. They can be viewed as a
new type of deep architecture, where sum layers alternate with product layers. Deep networks have
many layers of hidden variables, which greatly increases their representational power, but inference
with even a single layer is generally intractable, and adding layers compounds the problem [3].
SPNs are a deep architecture with full probabilistic semantics where inference is guaranteed to be
tractable, under general conditions derived by Poon and Domingos [23]. Despite their tractability,
SPNs are quite expressive [16], and have been used to solve difficult problems in vision [23, 1].
Poon and Domingos introduced an algorithm for generatively training SPNs, yet it is generally
observed that discriminative training fares better. By optimizing P(Y|X) instead of P(X,Y), conditional random fields retain joint inference over dependent label variables Y while allowing for
flexible features over given inputs X [22]. Unfortunately, the conditional partition function Z(X)
is just as prone to intractability as with generative training. For this reason, low treewidth models
(e.g. chains and trees) of Y are commonly used. Research suggests that approximate inference can
make it harder to learn rich structured models [21]. In this paper, discriminatively training SPNs
will allow us to combine flexible features with fast, exact inference over high treewidth models.
With inference and learning that easily scales to many layers, SPNs can be viewed as a type of
deep network. Existing deep networks employ discriminative training with backpropagation through
softmax layers or support vector machines over network variables. Most networks that are not purely
feed-forward require approximate inference. Poon and Domingos showed that deep SPNs could be
learned faster and more accurately than deep belief networks and deep Boltzmann machines on a
generative image completion task [23]. This paper contributes a discriminative training algorithm
that could be used on its own or with generative pre-training.
For the first time we combine the advantages of SPNs with those of discriminative models. In this
paper we will review SPNs and describe the conditions under which an SPN can represent the conditional partition function. We then provide a training algorithm, demonstrate how to compute the
gradient of the conditional log-likelihood of an SPN using backpropagation, and explore variations
of inference. Finally, we show state-of-the-art results where a discriminatively-trained SPN achieves
higher accuracy than SVMs and deep models on image classification tasks.
2 Sum-Product Networks
SPNs were introduced with the aim of identifying the most expressive tractable representation possible. The foundation for their work lies in Darwiche's network polynomial [14]. We define an unnormalized probability distribution Φ(x) ≥ 0 over a vector of Boolean variables X. The indicator function [.] is one when its argument is true and zero otherwise; we abbreviate [Xi] and [X̄i] as xi and x̄i. To distinguish random variables from indicator variables, we use roman font for the former and italic for the latter. Vectors of variables are denoted by bold roman and bold italic font, respectively.
The network polynomial of Φ(x) is defined as Σ_x Φ(x) Π(x), where Π(x) is the product of indicators that are one in state x. For example, the network polynomial of the Bayesian network X1 → X2 is

P(x1)P(x2|x1) x1 x2 + P(x1)P(x̄2|x1) x1 x̄2 + P(x̄1)P(x2|x̄1) x̄1 x2 + P(x̄1)P(x̄2|x̄1) x̄1 x̄2.

To compute P(X1 = true, X2 = false), we access the corresponding term of the network polynomial by setting indicators x1 and x̄2 to one and the rest to zero. To find P(X2 = true), we fix evidence on X2 by setting x2 to one and x̄2 to zero and marginalize X1 by setting both x1 and x̄1 to one. Notice that there are two reasons we might set an indicator xi = 1: (1) evidence {Xi = true}, in which case we set x̄i = 0, and (2) marginalization of Xi, where x̄i = 1 as well. In general the role of an indicator xi is to determine whether terms compatible with variable state Xi = true are included in the summation, and similarly for x̄i. With this notation, the partition function Z can be computed by setting all indicators of all variables to one.
The network polynomial has size exponential in the number of variables, but in many cases it can
be represented more compactly using a sum-product network [23, 14].
Definition 1. (Poon & Domingos, 2011) A sum-product network (SPN) over variables X1, ..., Xd is a rooted directed acyclic graph whose leaves are the indicators x1, ..., xd and x̄1, ..., x̄d and whose internal nodes are sums and products. Each edge (i, j) emanating from a sum node i has a non-negative weight wij. The value of a product node is the product of the values of its children. The value of a sum node is Σ_{j∈Ch(i)} wij vj, where Ch(i) are the children of i and vj is the value of node j. The value of an SPN S[x1, x̄1, ..., xd, x̄d] is the value of its root.
If we could replace the exponential sum over variable states in the partition function with the linear evaluation of the network, inference would be tractable. For example, the SPN in Figure 1 represents the joint probability of three Boolean variables P(X1, X2, X3) in the Bayesian network X2 ← X1 → X3 using six indicators S[x1, x̄1, x2, x̄2, x3, x̄3]. To compute P(X1 = true), we could sum over the joint states of X2 and X3, evaluating the network a total of four times, S[1,0,0,1,0,1] + ... + S[1,0,1,0,1,0]. Instead, we set the indicators so that the network sums out both X2 and X3. An indicator setting of S[1,0,1,1,1,1] computes the sum over all states compatible with our evidence e = {X1 = true} and requires only one evaluation.
Figure 1: SPN over Boolean variables X1, X2, X3.
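The single-pass marginalization can be illustrated with a minimal bottom-up evaluator. This is our own sketch, not the authors' code, and the small SPN below (over two variables, with illustrative weights) is not the one in Figure 1:

class Indicator:
    def __init__(self, var, value):          # value: True for x_i, False for x̄_i
        self.var, self.value = var, value
    def eval(self, ind):                     # ind[(var, value)] is 0 or 1
        return ind[(self.var, self.value)]

class Sum:
    def __init__(self, children, weights):
        self.children, self.weights = children, weights
    def eval(self, ind):
        return sum(w * c.eval(ind) for w, c in zip(self.weights, self.children))

class Product:
    def __init__(self, children):
        self.children = children
    def eval(self, ind):
        v = 1.0
        for c in self.children:
            v *= c.eval(ind)
        return v

# A complete and consistent SPN over X1, X2 (weights made up for illustration).
x1, nx1 = Indicator(1, True), Indicator(1, False)
x2, nx2 = Indicator(2, True), Indicator(2, False)
s1 = Sum([x2, nx2], [0.7, 0.3])
s2 = Sum([x2, nx2], [0.4, 0.6])
root = Sum([Product([x1, s1]), Product([nx1, s2])], [0.6, 0.4])

# Evidence {X1 = true}, X2 marginalized: one bottom-up pass gives P(X1 = true).
marg = {(1, True): 1, (1, False): 0, (2, True): 1, (2, False): 1}
print(root.eval(marg))   # 0.6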
However, not every SPN will have this property. If a linear evaluation of an SPN with indicators
set to represent evidence equals the exponential sum over all variable states consistent with that
evidence, the SPN is valid.
Definition 2. (Poon & Domingos, 2011) A sum-product network S is valid iff S(e) = Φ_S(e) for all
evidence e.
In their paper, Poon and Domingos prove that there are two conditions sufficient for validity: completeness and consistency.
Definition 3. (Poon & Domingos, 2011) A sum-product network is complete iff all children of the
same sum node have the same scope.
Definition 4. (Poon & Domingos, 2011) A sum-product network is consistent iff no variable appears
negated in one child of a product node and non-negated in another.
Theorem 1. (Poon & Domingos, 2011) A sum-product network is valid if it is complete and consistent.
The scope of a node is defined as the set of variables that have indicators among the node's descendants. To "appear in a child" means to be among that child's descendants. If a sum node is incomplete, the SPN will undercount the true marginals. Since an incomplete sum node has scope larger than a child, that child will be non-zero for more than one state of the sum (e.g. if S[x1, x̄1, x2, x̄2] = (x1 + x2), then S[1,0,1,1] < S[1,0,1,0] + S[1,0,0,1]). If a product node is inconsistent, the SPN will overcount the marginals as it will incorporate impossible states (e.g. x1 · x̄1) into its computation.
Poon and Domingos show how to generatively train the parameters of an SPN. One method is to
compute the likelihood gradient and optimize with gradient descent (GD). They also show how
to use expectation maximization (EM) by considering each sum node as the marginalization of a
hidden variable [17]. They found that online EM using most probable explanation (MPE or "hard")
inference worked the best for their image completion task.
Gradient diffusion is a key issue in training deep models. It is commonly observed in neural networks that when the gradient is propagated to lower layers it becomes less informative [3]. When
every node in the network takes fractional responsibility for the errors of a top level node, it becomes difficult to steer parameters out of local minima. Poon and Domingos also saw this effect
when using gradient descent and EM to train SPNs. They found that online hard EM could provide
a sparse but strong learning signal to synchronize the efforts of upper and lower nodes. Note that
hard training is not exclusive to EM. In the next section we show how to discriminatively train SPNs
with hard gradient descent.
3 Discriminative Learning of SPNs
We define an SPN S[y, h|x] that takes as input three disjoint sets of variables H, Y, and X (hidden,
query, and given). We denote the setting of all h indicator functions to 1 as S[y, 1|x], where the
bold 1 is a vector. We do not sum over states of given variables X when discriminatively training
SPNs. Given an instance, we treat X as constants. This means that one ignores X variables in the
scope of a node when considering completeness and consistency. Since adding a constant as a child
to a product node cannot make that product inconsistent, a variable x can be the child of any product
node in a valid SPN. To maintain completeness, x can only be the child of a sum node that has scope
outside of Y or H.
Algorithm 1: LearnSPN
Input: Set D of instances over variables X and label variables Y, a valid SPN S with initialized parameters.
Output: An SPN with learned weights
repeat
  for all d ∈ D do
    UpdateWeights(S, Inference(S, x_d, y_d))
until convergence or early stopping condition
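Schematically, Algorithm 1 is the following loop. This is our rendering, not the authors' code: Inference and UpdateWeights are placeholders for the procedures summarized later in Tables 1 and 2, and `converged` stands for whatever convergence or early-stopping test is used:

def learn_spn(spn, data, inference, update_weights,
              converged=lambda spn: False, max_epochs=100):
    # Online training: one inference pass and one weight update per instance.
    for epoch in range(max_epochs):
        for x_d, y_d in data:
            stats = inference(spn, x_d, y_d)   # soft or hard inference
            update_weights(spn, stats)         # e.g. a gradient step
        if converged(spn):                     # early stopping condition
            break
    return spn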
The parameters of an SPN can be learned using an online procedure as in Algorithm 1 as proposed
by Poon and Domingos. The three dimensions of the algorithm are generative vs. discriminative,
the inference procedure, and the weight update. Poon and Domingos discussed generative gradient
descent with marginal inference as well as EM with marginal and MPE inference. In this section we
will derive discriminative gradient descent with marginal and MPE inference, where hard gradient
descent can also be used for generative training. EM is not typically used for discriminative training
as it requires modification to lower bound the conditional likelihood [25] and there may not be a
closed form for the M-step.
3.1 Discriminative Training with Marginal Inference
A component of the gradient of the conditional log likelihood takes the form

∂/∂w log P(y|x) = ∂/∂w log Σ_h Φ(Y=y, H=h|x) − ∂/∂w log Σ_{y',h} Φ(Y=y', H=h|x)
                = (1 / S[y,1|x]) ∂S[y,1|x]/∂w − (1 / S[1,1|x]) ∂S[1,1|x]/∂w,
where the two summations are separate bottom-up evaluations of the SPN with indicators set as
S[y, 1|x] and S[1, 1|x], respectively.
The partial derivatives of the SPN with respect to all weights can be computed with backpropagation,
detailed in Algorithm 2. After performing a bottom-up evaluation of the SPN, partial derivatives are
passed from parent to child as follows from the chain rule and described in [15]. The form of
backpropagation presented takes time linear in the number of nodes in the SPN if product nodes
have a bounded number of children.
Our gradient descent update then follows the direction of the partial derivative of the conditional log likelihood with learning rate η: Δw = η ∂/∂w log P(y|x). After each gradient step we optionally renormalize the weights of a sum node so they sum to one. Empirically we have found this to produce the best results. The second SPN evaluation that marginalizes H and Y can reuse computation
from the first, for example, when Y is modeled by a root sum node. In this case the values of all
non-root nodes are equivalent between the two evaluations. For any architecture, one can memoize
values of nodes that do not have a query variable indicator as a descendant.
Algorithm 2: BackpropSPN
Input: A valid SPN S, where S_n denotes the value of node n after bottom-up evaluation.
Output: Partial derivatives of the SPN with respect to every node, ∂S/∂S_n, and weight, ∂S/∂w_{i,j}
Initialize all ∂S/∂S_n = 0 except ∂S/∂S_root = 1
for all n ∈ S in top-down order do
  if n is a sum node then
    for all j ∈ Ch(n) do
      ∂S/∂S_j ← ∂S/∂S_j + w_{n,j} ∂S/∂S_n
      ∂S/∂w_{n,j} ← S_j ∂S/∂S_n
  else
    for all j ∈ Ch(n) do
      ∂S/∂S_j ← ∂S/∂S_j + ∂S/∂S_n Π_{k∈Ch(n)\{j}} S_k
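A direct rendering of Algorithm 2 in Python (our sketch, not the authors' code; nodes are dicts with fields 'type', 'children', 'weights', and 'value' holding S_n from a prior bottom-up pass, listed parents-before-children):

def backprop_spn(nodes_top_down):
    # nodes_top_down: list of node dicts, root first, parents before children.
    for n in nodes_top_down:
        n['dS'] = 0.0                                    # ∂S/∂S_n
    nodes_top_down[0]['dS'] = 1.0                        # root: ∂S/∂S = 1
    dW = {}                                              # ∂S/∂w_{n,j} per edge
    for n in nodes_top_down:
        if n['type'] == 'sum':
            for w, j in zip(n['weights'], n['children']):
                j['dS'] += w * n['dS']
                dW[(id(n), id(j))] = dW.get((id(n), id(j)), 0.0) + j['value'] * n['dS']
        elif n['type'] == 'prod':
            for j in n['children']:
                others = 1.0
                for k in n['children']:
                    if k is not j:
                        others *= k['value']
                j['dS'] += n['dS'] * others
    return dW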
3.2 Discriminative Training with MPE Inference
There are several reasons why MPE inference is appealing for discriminatively training SPNs. As
discussed above, hard inference was crucial for overcoming gradient diffusion when generatively
training SPNs. For many applications the goal is to predict the most probable structure, and therefore
it makes sense to use this also during training. Finally, it is common to approximate summations
with maximizations for reasons of speed or tractability. Though summation in SPNs is fast and
exact, MPE inference is still faster. We derive discriminative gradient descent using MPE inference.
Figure 2: Positive and negative terms in the hard gradient. The root node sums out the variable Y, the two sum nodes on the left sum out the hidden variable H1, the two sum nodes on the right sum out H2, and a circled 'f' denotes an input variable Xi. Dashed lines indicate negative elements in the gradient.
We define a max-product network (MPN) M[y, h|x] based on the max-product semiring. This network compactly represents the maximizer polynomial max_x Φ(x) Π(x), which computes the MPE [15]. To convert an SPN to an MPN, we replace each sum node by a max node, where weights on children are retained. The gradient of the conditional log likelihood with MPE inference is then

∂/∂w log P̃(y|x) = ∂/∂w log max_h Φ(Y=y, H=h|x) − ∂/∂w log max_{y',h} Φ(Y=y', H=h|x),
where the two maximizations are computed by M [y, 1|x] and M [1, 1|x]. MPE inference also
consists of a bottom-up evaluation followed by a top-down pass. Inference yields a branching path
through the SPN called a complete subcircuit that includes an indicator (and therefore assignment)
for every variable [15]. Analogous to Viterbi decoding, the path starts at the root node and at each
max (formerly sum) node it only travels to the max-valued child. At product nodes, the path branches
to all children. We define W as the multiset of weights traversed by this path¹. The value of the MPN takes the form of a product Π_{w_i∈W} w_i^{c_i}, where c_i is the number of times w_i appears in W.
The partial derivatives of the MPN with respect to all nodes and weights are computed by Algorithm 2 modified to accommodate MPNs: (1) S becomes M, and (2) when n is a sum node, the body of the for-all loop is run only once, with j the max-valued child.
The partial derivative of the logarithm of an MPN with respect to a weight takes the form

∂ log M / ∂w_i = (1/M) ∂M/∂w_i = ( c_i w_i^{c_i−1} Π_{w_j∈W\{w_i}} w_j^{c_j} ) / ( Π_{w_j∈W} w_j^{c_j} ) = c_i / w_i.
The gradient of the conditional log likelihood with MPE inference is therefore Δc_i / w_i, where Δc_i = c'_i − c''_i is the difference between the number of times w_i is traversed by the two MPE inference paths in M[y,1|x] and M[1,1|x], respectively. The hard gradient update is then Δw_i = η ∂/∂w_i log P̃(y|x) = η Δc_i / w_i.
The hard gradient for a training instance (x_d, y_d) is illustrated in Figure 2. In the first two expressions, the complete subcircuit traveled by each MPE inference is shown in bold. Product nodes do not have weighted children, so they do not appear in the gradient, depicted in the last expression.
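A sketch of the hard-gradient bookkeeping (ours, not the authors' code): trace the MPE subcircuit of the max-product network, count weight traversals, and apply Δw_i = η Δc_i / w_i. The node layout matches the dict representation in the backpropagation sketch above; get_w and set_w are placeholder accessors for the network's weights:

from collections import Counter

def mpe_weight_counts(node, counts=None):
    # Assumes node['value'] already holds the bottom-up max-product value M_n.
    if counts is None:
        counts = Counter()
    if node['type'] == 'max':                  # a sum node turned into a max node
        best = max(range(len(node['children'])),
                   key=lambda i: node['weights'][i] * node['children'][i]['value'])
        counts[(id(node), best)] += 1          # weight w_{node,best} joins W
        mpe_weight_counts(node['children'][best], counts)
    elif node['type'] == 'prod':               # the path branches to all children
        for c in node['children']:
            mpe_weight_counts(c, counts)
    return counts

def hard_gradient_update(counts_clamped, counts_free, get_w, set_w, eta):
    # counts_clamped: W from M[y,1|x]; counts_free: W from M[1,1|x].
    for key in set(counts_clamped) | set(counts_free):
        delta_c = counts_clamped[key] - counts_free[key]   # Δc_i
        w = get_w(key)
        set_w(key, w + eta * delta_c / w)                  # Δw_i = η Δc_i / w_i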
We can also easily add regularization to SPN training. An L2 weight penalty takes the familiar
form of −λ||w||², and partial derivatives −2λw_i can be added to the gradient. With an appropriate
optimization method, an L1 penalty could also be used for learning with marginal inference on dense
SPN architectures. However, sparsity is not as important for SPNs as it is for Markov random fields,
where a non-zero weight can have outsize impact on inference time; with SPNs inference is always
linear with respect to model size.
A summary of the variations of Algorithm 1 is provided in Tables 1 and 2. The generative hard
gradient can be used in place of online EM for datasets where it would be prohibitive to store
inference results from past epoch. For architectures that have high fan-in sum nodes, soft inference
may be able to separate groups of modes faster than hard inference, which can only alter one child
of a sum node at a time.
Table 1: Inference procedures
Node    | Soft Inference                                      | Hard Inference
Sum     | ∂S/∂S_n = Σ_{k∈Pa(n)} w_{kn} ∂S/∂S_k                | ∂M/∂M_n = Σ_{k∈Pa(n)} [ w_{kn} ∂M/∂M_k if w_{kn} ∈ W, else 0 ]
Product | ∂S/∂S_n = Σ_{k∈Pa(n)} ∂S/∂S_k Π_{l∈Ch(k)\{n}} S_l   | ∂M/∂M_n = Σ_{k∈Pa(n)} ∂M/∂M_k Π_{l∈Ch(k)\{n}} M_l
Weight  | ∂S/∂w_{ki} = (∂S/∂S_k) S_i                          | ∂M/∂w_{ki} = (∂M/∂M_k) M_i

Table 2: Weight updates
Update   | Soft Inference                                                   | Hard Inference
Gen. GD  | Δw = η ∂S[x,y]/∂w                                                | Δw_i = η c_i / w_i
Gen. EM  | P(H_k = i|x,y) ∝ w_{ki} ∂S[x,y]/∂S_k                             | P(H_k = i|x,y) = [ 1 if w_{ki} ∈ W, else 0 ]
Disc. GD | Δw = η ( (1/S[y,1|x]) ∂S[y,1|x]/∂w − (1/S[1,1|x]) ∂S[1,1|x]/∂w ) | Δw_i = η Δc_i / w_i

We observe the similarity between the updates of hard EM and hard gradient descent. In particular, if we reparameterize the SPN so that each child of a sum node is weighted by w_i = e^{w'_i}, the form of the partial derivative of the log MPN becomes

∂ log M / ∂w'_i = (1/M) ∂M/∂w'_i = ( c_i Π_{w'_j∈W'} e^{c_j w'_j} ) / ( Π_{w'_j∈W'} e^{c_j w'_j} ) = c_i.

This means that the hard gradient update for weights in logspace is Δw'_i = η Δc_i, which resembles structured perceptron [13].

¹A consistent SPN allows for MPE inference to reach the same indicator more than once in the same branching path.
4 Experiments
We have applied discriminative training of SPNs to image classification benchmarks. CIFAR-10
and STL-10 are standard datasets for deep networks and unsupervised feature learning. Both are
10-class small image datasets. We achieve the best results to date on both tasks.
We follow the feature extraction pipeline of Coates et al. [10], which was also used recently to learn pooling functions [20]. The procedure consists of extracting 4 × 10⁵ 6x6 pixel patches from the training set images, ZCA whitening those patches [19], running k-means for 50 rounds, and then normalizing the dictionary to have zero mean and unit variance. We then use the dictionary to extract K features at every 6x6 pixel site in the image (unit stride) with the "triangle" encoding f_k(x) = max{0, z̄ − z_k}, where z_k = ||x − c_k||_2, c_k is the k-th item in the dictionary, and z̄ is the average z_k. For each image of CIFAR-10, for example, this yields a 27 × 27 × K feature vector that is finally downsampled by max-pooling to a G × G × K feature vector.
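A sketch of the triangle encoding (ours, assuming NumPy; the dictionary C would come from k-means on whitened patches as described above):

import numpy as np

def triangle_encode(patches, C):
    # patches: (n, d) whitened 6x6 patches, flattened; C: (K, d) k-means dictionary.
    z = np.linalg.norm(patches[:, None, :] - C[None, :, :], axis=2)   # z_k = ||x - c_k||_2
    z_bar = z.mean(axis=1, keepdims=True)                             # z̄, the average z_k
    return np.maximum(0.0, z_bar - z)                                 # f_k(x) = max{0, z̄ - z_k}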
We experiment with a simple architecture that allows for discriminative learning of local structure. This architecture cannot be generatively trained as it violates consistency over X. Inspired by the successful star models in Felzenszwalb et al. [18], we construct a network with C classes, P parts per class, and T mixture components per part. A part is a pattern of image patch features that can occur anywhere in the image (e.g. an arrangement of patches that defines a curve). Each part filter f_cpt is of dimension W × W × K and is initialized to 0. The root of the SPN is a sum node with a child S_c for each class c in the dataset multiplied by the indicator for that state of the label variable Y. S_c is a product over P nodes S_cp, where each S_cp is a sum node over T nodes S_cpt. The hidden variables H represent the choice of cluster in the mixture over a part and its position (S_cp and S_cpt, respectively). Finally, S_cpt sums over positions i, j in the image of the logistic function e^{x_ij · f_cpt}, where the given variable x_ij is the same dimension as f_cpt and parts can overlap.
Figure 3: SPN architecture for experiments. Hidden variable indicators omitted for legibility.
Notice that the mixture Scp models an additional level of spatial structure on top of the image patch
features learned by k-means. Coates and Ng [12] also learn higher-order structure, but whereas our
method learns structure discriminatively in the context of a parts-based model, their unsupervised
algorithm greedily groups features based on correlation and is unable to learn mixtures. Compared
with the pooling functions in Jia et al. [20] that model independent translation of patch features,
our architecture models how nearby features move together. Other deep probabilistic architectures
should be able to model high-level structure, but considering the difficulty in training these models
with approximate inference, it is hard to make full use of their representational power. Unlike the
star model of Felzenswalb et al. [18] that learns filters over predefined HOG image features, our
SPN learns on top of learned image features that can model color and detailed patterns.
Generative SPN architectures on the same features produce unsatisfactory results as generative training is led astray by the large number of features, very few of which differentiate labels. In the generative SPN paper [23], continuous variables are modeled with univariate Gaussians at the leaves
(viewed as a sum node with infinite children but finite weight sum). With discriminative training, X
can be continuous because we always condition on it, which effectively folds it into the weights.
All networks are learned with stochastic gradient descent regularized by early stopping. We found
that using marginal inference for the root node and MPE inference for the rest of the network worked
best. This allows the SPN to continue learning the difference between classes even when it correctly
classifies a training instance. The fractions of the training set reserved for validation with CIFAR-10 and STL-10 were 10% and 20%, respectively. Learning rates, P, and T were chosen based on
validation set performance.
4.1 Results on CIFAR-10
CIFAR-10 consists of 32x32 pixel images: 5 × 10⁴ for training and 10⁴ for testing. We first compare
discriminative SPNs with other methods as we vary the size of the dictionary K. The results are
seen in Figure 4. To fairly compare with recent work [10, 20] we also set G = 4. In general,
we observe that SPNs can achieve higher performance using half as many features as the next best
approach, the learned pooling function. We hypothesize that this is because the SPN architecture
allows us to discriminatively train large moveable parts, image structure that cannot be captured by
larger dictionaries. In Jia et al. [20] the pooling functions blur individual features (i.e. a 6x6 pixel
dictionary item), from which the classifier may have trouble inferring the coordination of image
parts.
We then experimented with a finer grid and fewer dictionary items (G = 7, K = 400). Pooling
functions destroy information, so it is better if less is done before learning. Finer grids are less
feasible for the method in Jia et al. [20] as the number of rectangular pooling functions grows
O(G4 ). Our best test accuracy of 83.96% was achieved with W = 3, P = 200, and T = 2, chosen
Figure 4: Impact of dictionary size K with a 4x4 pooling grid (W=3) on CIFAR-10 test accuracy. Methods compared: Discriminative SPN; Learned Pooling (Jia et al.); K-means (triangle), whitened (Coates et al.); Auto-encoder, raw (Coates et al.); RBM, whitened (Coates et al.).
Table 3: Test accuracies on CIFAR-10.
Method                     | Dictionary      | Accuracy
Logistic Regression [24]   |                 | 36.0%
SVM [5]                    |                 | 39.5%
SIFT [5]                   |                 | 65.6%
mcRBM [24]                 |                 | 68.3%
mcRBM-DBN [24]             |                 | 71.0%
Convolutional RBM [10]     |                 | 78.9%
K-means (Triangle) [10]    | 4000, 4x4 grid  | 79.6%
HKDES [4]                  |                 | 80.0%
3-Layer Learned RF [12]    | 1600, 9x9 grid  | 82.0%
Learned Pooling [20]       | 6000, 4x4 grid  | 83.11%
Discriminative SPN         | 400, 7x7 grid   | 83.96%
Table 4: Comparison of average test accuracies on all folds of STL-10.
Method                                | Accuracy (±σ)
1-layer Vector Quantization [11]      | 54.9% (± 0.4%)
1-layer Sparse Coding [11]            | 59.0% (± 0.8%)
3-layer Learned Receptive Field [12]  | 60.1% (± 1.0%)
Discriminative SPN                    | 62.3% (± 1.0%)
by validation set performance. This architecture achieves the highest published test accuracy on the
CIFAR-10 dataset, remarkably using one fifth the number of features of the next best approach. We
compare top CIFAR-10 results in Table 3, highlighting the dictionary size of systems that use the
feature extraction from Coates et al. [10].
4.2 Results on STL-10
STL-10 has larger 96x96 pixel images and less labeled data (5,000 training and 8,000 test) than
CIFAR-10 [10]. The training set is mapped to ten predefined folds of 1,000 images. We experimented on the STL-10 dataset in a manner similar to CIFAR-10, ignoring the 10⁵ items of unlabeled data. Ten models were trained on the pre-specified folds, and test accuracy is reported as an average. With K=1600, G=8, W=4, P=10, and T=3 we achieved 62.3% (± 1.0% standard deviation among folds), the highest published test accuracy as of writing. Notably, this includes approaches
that make use of the unlabeled training images. Like Coates and Ng [12], our architecture learns
local relations among different feature maps. However, the SPN is able to discriminatively learn
latent mixtures, which can encode a more nuanced decision boundary than the linear classifier used
in their work. After we carried out our experiments, Bo et al. [6] reported a higher accuracy with
their unsupervised features and a linear SVM. Just as with the features of Coates et al. [10], we
anticipate that using an SPN instead of the SVM would be beneficial by learning spatial structure
that the SVM cannot model.
5 Conclusion
Sum-product networks are a new class of probabilistic model where inference remains tractable despite high treewidth and many hidden layers. This paper introduced the first algorithms for learning
SPNs discriminatively, using a form of backpropagation to compute gradients. Discriminative training allows for a wider variety of SPN architectures than generative training, because completeness
and consistency do not have to be maintained over evidence variables. We proposed both "soft" and "hard" gradient algorithms, using marginal inference in the "soft" case and MPE inference in the "hard" case. The latter successfully combats the diffusion problem, allowing deep networks to
be learned. Experiments on image classification benchmarks illustrate the power of discriminative
SPNs.
Future research directions include applying other discriminative learning paradigms to SPNs (e.g.
max-margin methods), automatically learning SPN structure, and applying discriminative SPNs to
a variety of structured prediction problems.
Acknowledgments: This research was partly funded by ARO grant W911NF-08-1-0242, AFRL
contract FA8750-09-C-0181, NSF grant IIS-0803481, and ONR grant N00014-12-1-0312. The
views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO, AFRL,
NSF, ONR, or the United States Government.
References
[1] M. Amer and S. Todorovic. Sum-product networks for modeling activities with stochastic structure. CVPR, 2012.
[2] F. Bach and M. I. Jordan. Thin junction trees. Advances in Neural Information Processing Systems, 14:569-576, 2002.
[3] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, 2009.
[4] L. Bo, K. Lai, X. Ren, and D. Fox. Object recognition with hierarchical kernel descriptors. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1729-1736. IEEE, 2011.
[5] L. Bo, X. Ren, and D. Fox. Kernel descriptors for visual recognition. Advances in Neural Information Processing Systems, 2010.
[6] L. Bo, X. Ren, and D. Fox. Unsupervised feature learning for RGB-D based object recognition. ISER, 2012.
[7] C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence in Bayesian networks. In Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence, pages 115-123, 1996.
[8] M. Chavira and A. Darwiche. On probabilistic inference by weighted model counting. Artificial Intelligence, 172(6-7):772-799, 2008.
[9] A. Chechetka and C. Guestrin. Efficient principled learning of thin junction trees. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.
[10] A. Coates, H. Lee, and A. Y. Ng. An analysis of single-layer networks in unsupervised feature learning. In AISTATS, 2011.
[11] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning, volume 8, page 10, 2011.
[12] A. Coates and A. Y. Ng. Selecting receptive fields in deep networks. NIPS, 2011.
[13] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 1-8, Philadelphia, PA, 2002. ACL.
[14] A. Darwiche. A differential approach to inference in Bayesian networks. Journal of the ACM, 50:280-305, 2003.
[15] A. Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009.
[16] O. Delalleau and Y. Bengio. Shallow vs. deep sum-product networks. In Proceedings of the 25th Conference on Neural Information Processing Systems, 2011.
[17] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1-38, 1977.
[18] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1-8. IEEE, 2008.
[19] A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4-5):411-430, 2000.
[20] Y. Jia, C. Huang, and T. Darrell. Beyond spatial pyramids: Receptive field learning for pooled image features. In CVPR, 2012.
[21] A. Kulesza, F. Pereira, et al. Structured learning with approximate inference. Advances in Neural Information Processing Systems, 20:785-792, 2007.
[22] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282-289, Williamstown, MA, 2001. Morgan Kaufmann.
[23] H. Poon and P. Domingos. Sum-product networks: A new deep architecture. In Proc. 12th Conf. on Uncertainty in Artificial Intelligence, pages 337-346, 2011.
[24] M. A. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2551-2558. IEEE, 2010.
[25] J. Salojärvi, K. Puolamäki, and S. Kaski. Expectation maximization algorithms for conditional likelihoods. In Proceedings of the 22nd International Conference on Machine Learning, pages 752-759. ACM, 2005.
3,885 | 4,517 | Meta-Gaussian Information Bottleneck
Mélanie Rey
Department of Mathematics and Computer Science
University of Basel
[email protected]
Volker Roth
Department of Mathematics and Computer Science
University of Basel
[email protected]
Abstract
We present a reformulation of the information bottleneck (IB) problem in terms
of copula, using the equivalence between mutual information and negative copula entropy. Focusing on the Gaussian copula we extend the analytical IB solution available for the multivariate Gaussian case to distributions with a Gaussian
dependence structure but arbitrary marginal densities, also called meta-Gaussian
distributions. This opens new possible applications of IB to continuous data and
provides a solution more robust to outliers.
1 Introduction
The information bottleneck method (IB) [1] considers the concept of relevant information in the
data compression problem, and takes a new perspective to signal compression which was classically
treated using rate distortion theory. The IB method formalizes the idea of relevance, or meaningful information, by introducing a relevance variable Y . The problem is then to obtain an optimal
compression T of the data X which preserves a maximum of information about Y . Although the
IB method beautifully formalizes the compression problem under relevance constraints, the practical solution of this problem remains difficult, particularly in high dimensions, since the mutual
informations I(X; T ), I(Y ; T ) must be estimated. The IB optimization problem has no available
analytical solution in the general case. It can be solved iteratively using the generalized BlahutArimoto algorithm which, however, requires us to estimate the joint distribution of the potentially
high-dimensional variables X and Y . A formal analysis of the difficulties of this estimation problem
was conducted in [2]. In the continuous case, estimation of multivariate densities becomes arduous
and can be a major impediment to the practical application of IB. A notable exception is the case of
joint Gaussian (X, Y ) for which an analytical solution for the optimal representation T exists [3].
The optimal T is jointly Gaussian with (X, Y ) [4] and takes the form of a noisy linear projection
to eigenvectors of the normalised conditional covariance matrix. The existence of an analytical solution opens new application possibilities and IB becomes practically feasible in higher dimensions
[5]. Finding closed form solutions for other continuous distribution families remains an open challenge. The practical usefulness of the Gaussian IB (GIB), on the other hand, suffers from its missing
flexibility and the statistical problem of finding a robust estimate of the joint covariance matrix of
(X, Y ) in high-dimensional spaces.
Compression and relevance in IB are defined in terms of mutual information (MI) of two random
vectors V and W , which is defined as the reduction in the entropy of V by the conditional entropy
of V given W . MI bears an interesting relationship to copulas: mutual information equals negative
copula entropy [6]. This relation between two seemingly unrelated concepts might appear surprising, but it directly follows from the definition of a copula as the object that captures the "pure" dependency structure of random variables [7]: a multivariate distribution consists of univariate random variables related to each other by a dependence mechanism, and copulas provide a framework
to separate the dependence structure from the marginal distributions. In this work we reformulate
the IB problem for continuous variables in terms of copulas and show that IB is completely
independent of the marginal distributions of X, Y . The IB problem in the continuous case is in fact
to find the optimal copula (or dependence structure) of T and X, knowing the copula of X and the
relevance variable Y . We focus on the case of Gaussian copula and on the consequences of the
IB reformulation for the Gaussian IB. We show that the analytical solution available for GIB can
naturally be extended to multivariate distributions with Gaussian copula and arbitrary marginal densities, also called meta-Gaussian densities. Moreover, we show that the GIB solution depends only on a
correlation matrix, and not on the variance. This allows us to use robust rank correlation estimators
instead of unstable covariance estimators, and gives a robust version of GIB.
2 Information Bottleneck and Gaussian IB
2.1 General Information Bottleneck.
Consider two random variables X and Y with values in the measurable spaces X and Y. Their
joint distribution pXY (x, y) will also be denoted p(x, y) for simplicity. We construct a compressed
representation T of X that is most informative about Y by solving the following variational problem:
min_{p(t|x)} L,  L ≡ I(X;T) − β I(T;Y),    (1)

where the Lagrange parameter β > 0 determines the trade-off between compression of X and
preservation of information about Y . Since the compressed representation is conditionally independent of Y given X as illustrated in Figure 1, to fully characterize T we only need to specify its joint
distribution with X, i.e. p(x, t). No analytical solution is available for the general problem defined
by (1) and this joint distribution must be calculated with an iterative procedure. In the case of discrete variables X and Y , p(x, t) is obtained iteratively by self-consistent determination of p(t|x),
p(t) and p(y|t) in the generalized Blahut-Arimoto algorithm. The resulting discrete T then defines
(soft) clusters of X. In the case of continuous X and Y , the same set of self-consistent equations
for p(t|x), p(t) and p(y|t) are obtained. These equations also translate into two coupled eigenvector
problems for ? log p(x|t)/?t and ? log p(y|t)/?t, but a direct solution of these problems is very
difficult in practice. However, when X and Y are jointly multivariate Gaussian distributed, this
problem becomes analytically tractable.
Figure 1: Graphical representation of the conditional independence structure of IB.
2.2 Gaussian IB.
Consider two joint Gaussian random vectors (rv) X and Y with zero mean:

(X, Y) ∼ N( 0_{p+q}, Σ = [ Σ_x, Σ_xy ; Σ_xy^T, Σ_y ] ),    (2)

where p is the dimension of X, q is the dimension of Y and 0_{p+q} is the zero vector of dimension p + q. In [4] it is proved that the optimal compression T is also jointly Gaussian with X and Y. This implies that T can be expressed as a noisy linear transformation of X:

T = AX + ξ,    (3)
where ξ ∼ N(0_p, Σ_ξ) is independent of X and A ∈ R^{p×p}. The minimization problem (1) is then reduced to solving:

min_{A,Σ_ξ} L,  L ≡ I(X;T) − β I(T;Y).    (4)

For a given trade-off parameter β, the optimal compression is given by T ∼ N(0_p, Σ_t) with Σ_t = A Σ_x A^T + Σ_ξ, and the noise variance can be fixed to the identity matrix Σ_ξ = I_p, as shown in [3]. The transformation matrix A is given by:

      [ 0^T; ...; 0^T ]                          0 ≤ β ≤ β_1^c
A =   [ α_1 v_1^T; 0^T; ...; 0^T ]               β_1^c ≤ β ≤ β_2^c     (5)
      [ α_1 v_1^T; α_2 v_2^T; 0^T; ...; 0^T ]    β_2^c ≤ β ≤ β_3^c
      ...

where v_1^T, ..., v_p^T are left eigenvectors of Σ_{x|y} Σ_x^{-1} sorted by their corresponding increasing eigenvalues λ_1, ..., λ_p. The critical β values are β_i^c = (1 − λ_i)^{-1}, and the α_i coefficients are defined by α_i = sqrt( (β(1 − λ_i) − 1) / (λ_i r_i) ) with r_i = v_i^T Σ_x v_i. In the above, 0^T is a p-dimensional row vector and semicolons separate rows of A. We can see from equation (5) that the optimal projection of X is a combination of weighted eigenvectors of Σ_{x|y} Σ_x^{-1}. The number of selected eigenvectors, and thus the effective dimension of T, depends on the parameter β.
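A sketch (ours, assuming NumPy; the function name and the eigenvalue bookkeeping are our choices) of how equation (5) can be computed:

import numpy as np

def gib_projection(Sx, Sxy, Sy, beta):
    # Left eigenvectors of M = Σ_{x|y} Σ_x^{-1} are right eigenvectors of M.T.
    Sx_cond_y = Sx - Sxy @ np.linalg.inv(Sy) @ Sxy.T        # Σ_{x|y}
    lam, V = np.linalg.eig((Sx_cond_y @ np.linalg.inv(Sx)).T)
    order = np.argsort(lam.real)                            # increasing λ_1, ..., λ_p
    lam, V = lam.real[order], V.real[:, order]
    A = np.zeros_like(Sx)
    for i, (l, v) in enumerate(zip(lam, V.T)):
        if beta > 1.0 / (1.0 - l):                          # β past the critical β_i^c
            r = v @ Sx @ v
            A[i] = np.sqrt((beta * (1.0 - l) - 1.0) / (l * r)) * v
    return A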
3 Copula and Information Bottleneck
3.1 Copula and Gaussian copula.
A multivariate distribution consists of univariate random variables related to each other by a dependence mechanism. Copulas provide a framework to separate the dependence structure from the marginal distributions. Formally, a d-dimensional copula is a multivariate distribution function C : [0,1]^d → [0,1] with standard uniform margins. Sklar's theorem [7] states the relationship between copulas and multivariate distributions. Any joint distribution function F can be represented using its marginal univariate distribution functions and a copula:

F(z_1, ..., z_d) = C( F_1(z_1), ..., F_d(z_d) ).    (6)

If the margins are continuous, then this copula is unique. Conversely, if C is a copula and F_1, ..., F_d are univariate distribution functions, then F defined as in (6) is a valid multivariate distribution function with margins F_1, ..., F_d. Assuming that C has d-th order partial derivatives, we can define the copula density function c(u_1, ..., u_d) = ∂^d C(u_1, ..., u_d) / (∂u_1 ... ∂u_d), u_1, ..., u_d ∈ [0,1]. The density corresponding to (6) can then be rewritten as a product of the marginal densities and the copula density function: f(z_1, ..., z_d) = c( F_1(z_1), ..., F_d(z_d) ) Π_{j=1}^d f_j(z_j).
Gaussian copulas constitute an important class of copulas. If F is a Gaussian distribution N_d(μ, Σ) then the corresponding C fulfilling equation (6) is a Gaussian copula. Due to basic invariance properties (cf. [8]), the copula of N_d(μ, Σ) is the same as the copula of N_d(0, P), where P is the correlation matrix corresponding to the covariance matrix Σ. Thus a Gaussian copula is uniquely determined by a correlation matrix P and we denote a Gaussian copula by C_P. Using equation (6) with C_P, we can construct multivariate distributions with arbitrary margins and a Gaussian dependence structure. These distributions are called meta-Gaussian distributions. Gaussian copulas conveniently have a copula density function:

c_P(u) = |P|^{-1/2} exp( −(1/2) Φ^{-1}(u)^T (P^{-1} − I) Φ^{-1}(u) ),    (7)

where Φ^{-1}(u) is a short notation for the univariate Gaussian quantile function applied to each component: Φ^{-1}(u) = (Φ^{-1}(u_1), ..., Φ^{-1}(u_d)).
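For illustration, the density (7) can be evaluated directly; a minimal sketch, assuming NumPy and SciPy:

import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u, P):
    # u: point in (0,1)^d; P: d x d correlation matrix.
    q = norm.ppf(np.asarray(u))                            # Φ^{-1} componentwise
    quad = q @ (np.linalg.inv(P) - np.eye(len(q))) @ q
    return np.linalg.det(P) ** -0.5 * np.exp(-0.5 * quad)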
3.2 Copula formulation of IB.
At the heart of the copula formulation of IB is the following identity: for a continuous random vector Z = (Z_1, ..., Z_d) with density f(z) and copula density c_Z(u), the multivariate mutual information or multi-information is the negative differential entropy of the copula density:

I(Z) ≡ D_kl( f(z) ‖ f_0(z) ) = ∫_{[0,1]^d} c_Z(u) log c_Z(u) du = −H(c_Z),    (8)

where u = (u_1, ..., u_d) ∈ [0,1]^d, D_kl denotes the Kullback-Leibler divergence, and f_0(z) = f_1(z_1) f_2(z_2) ... f_d(z_d). For continuous multivariate X, Y and T, equation (8) implies that:

I(X;T) = D_kl( f(x,t) ‖ f_0(x,t) ) − D_kl( f(x) ‖ f_0(x) ) − D_kl( f(t) ‖ f_0(t) )
       = −H(c_XT) + H(c_X) + H(c_T),
I(Y;T) = −H(c_YT) + H(c_Y) + H(c_T),
where cXT is the copula density of the vector (X1 , . . . , Xp , T1 , . . . , Tp ). The above derivation then
leads to the following proposition.
Proposition 3.1. Copula formulation of IB
The Information Bottleneck minimization problem (1) can be reformulated as:
min_{c_XT} L,  L = −H(c_XT) + H(c_X) + H(c_T) − β{ −H(c_YT) + H(c_Y) + H(c_T) }.    (9)
The minimization problem defined in (1) is solved under the assumption that the joint distribution of (X, Y) is known; this now translates into the assumption that the copula density c_XY (and thus c_X) is known. The density c_T is entirely determined by c_XT, and using the conditional independence structure it is clear that c_YT is also determined by c_XT when c_XY is known. Since the joint density of (X, Y, T) decomposes as:
f (x, y, t) = f (t, y|x)f (x) = f (t|x)f (y|x)f (x),
(10)
the corresponding copula density then also decomposes as:
cXY T (ux , uy , ut ) = RT |X (ux , ut )RY |X (ux , uy )cX (ux ),
(11)
where

R_{T|X}(u_x, u_t) = c_XT(u_x, u_t) / c_X(u_x),   u_x ∈ [0,1]^p, u_y ∈ [0,1]^q, u_t ∈ [0,1]^p,    (12)

as shown in [9]. We can finally rewrite the copula density of (Y, T) as:

c_YT(u_y, u_t) = ∫ c_XYT(u_x, u_y, u_t) du_x = ∫ c_XT(u_x, u_t) c_XY(u_x, u_y) / c_X(u_x) du_x.    (13)
The IB optimization problem actually reduces to finding an optimal copula density cXT . This implies that in order to construct the compression variable T , the only relevant aspect is the copula
dependence structure between X, T and Y .
4 Meta-Gaussian IB
4.1 Meta-Gaussian IB formulation.
The above reformulation of IB is of great practical interest when we focus on the special case of
the Gaussian copula. The only known case for which a simple analytical solution to the IB problem
exists is when (X, Y ) are joint Gaussians. Equation (9) shows that actually an optimal solution
does not depend on the margins but only on the copula density cXY. From this observation the idea
naturally follows that an analytical solution should also exist for any joint distribution of (X, Y )
which has a Gaussian copula, and that regardless of its margins. We show below in Proposition 4.1 that this is indeed the case. The notation X̃ and Ỹ is used to represent the normal scores:

X̃ = ( Φ^{-1} ∘ F_{X_1}(X_1), ..., Φ^{-1} ∘ F_{X_p}(X_p) ).    (14)
Since copulas are invariant to strictly increasing transformations the normal scores have the same
copulas as the original variables X and Y .
Proposition 4.1. Optimality of meta-Gaussian IB
Consider rv X, Y with a Gaussian dependence structure and arbitrary margins:

F_{X,Y}(x, y) = C_P( F_{X_1}(x_1), ..., F_{X_p}(x_p), F_{Y_1}(y_1), ..., F_{Y_q}(y_q) ),    (15)

where F_{X_i}, F_{Y_i} are the marginal distributions of X, Y and C_P is a Gaussian copula parametrized by a correlation matrix P. Then the optimum of the minimization problem (1) is obtained for T ∈ 𝒯, where 𝒯 is the set of all rv T such that (X, Y, T) has a Gaussian copula and T has Gaussian margins.
Before proving Proposition 4.1 we give a short lemma.

Lemma 4.1. T ∈ 𝒯 ⇔ (X̃, Ỹ, T) are jointly Gaussian.

Proof.
1. If T ∈ 𝒯 then (X, Y, T) has a Gaussian copula, which implies that (X̃, Ỹ, T) also has a Gaussian copula. Since X̃, Ỹ, T all have normally distributed margins it follows that (X̃, Ỹ, T) has a joint Gaussian distribution.
2. If (X̃, Ỹ, T) are jointly Gaussian then (X̃, Ỹ, T) has a Gaussian copula, which implies that (X, Y, T) again has a Gaussian copula. Since T has normally distributed margins, it follows that T ∈ 𝒯.
Proposition 4.1 can now be proven by contradiction.
Proof of Proposition 4.1. Assume there exists T* ∉ 𝒯 such that:

L(X, Y, T*) := I(X;T*) − β I(Y;T*) < min_{p(t|x), T∈𝒯} I(X;T) − β I(T;Y).    (16)

Since (X̃, Ỹ, T) has the same copula as (X, Y, T), we have that I(X̃;T) = I(X;T) and I(Ỹ;T) = I(Y;T). Using Lemma 4.1, the right-hand side of inequality (16) can be rewritten as:

min_{p(t|x), T∈𝒯} L(X, Y, T) = min_{p(t|x), T∈𝒯} L(X̃, Ỹ, T) = min_{p(t|x̃), (X̃,Ỹ,T)∼N} L(X̃, Ỹ, T).    (17)

Combining equations (16) and (17) we obtain:

I(X̃;T*) − β I(Ỹ;T*) < min_{p(t|x̃), (X̃,Ỹ,T)∼N} I(X̃;T) − β I(T;Ỹ).
This is in contradiction with the optimality of Gaussian information bottleneck, which states that the
optimal T is jointly Gaussian with (X, Y ). Thus the optimum for meta-Gaussian (X, Y ) is attained
for T with normal margins such that (X, Y, T ) also is meta-Gaussian.
Corollary 4.1. The optimal projection T° obtained for (X̃, Ỹ) is also optimal for (X, Y).

Proof. By the above we know that an optimal compression for (X, Y) can be obtained in the set of variables T such that (X̃, Ỹ, T) is jointly Gaussian; since L̃ = L, it is clear that T° is also optimal for (X, Y).
As a consequence of Proposition 4.1, for any random vector (X, Y) having a Gaussian copula dependence structure, an optimal projection T can be obtained by first calculating the vector of the normal scores (X̃, Ỹ) and then computing T = AX̃ + ξ. A is here entirely determined by the covariance matrix of the vector (X̃, Ỹ), which also equals its correlation matrix (the normal scores have unit variance by definition), and thus by the correlation matrix P parametrizing the Gaussian copula C_P. In practice the problem is reduced to the estimation of the Gaussian copula of (X, Y). In particular, for the traditional Gaussian case where (X, Y) ∼ N(0, Σ), this means that we actually do not need to estimate the full covariance Σ but only the correlations.
4.2 Meta-Gaussian mutual information.
The multi-information for a meta-Gaussian random vector Z = (Z_1, ..., Z_d) with copula C_{P_z} is:

    I(Z) = I(Z̃) = −(1/2) log |cov(Z̃)| = −(1/2) log |Σ_{z̃}| = −(1/2) log |corr(Z̃)| = −(1/2) log |P_z|,   (18)
where |·| denotes the determinant. A direct derivation of the multi-information for meta-Gaussian
random variables is also given in the supplementary material. The mutual information between
X and Y is then

    I(X; Y) = −(1/2) log |P| + (1/2) log |P_x| + (1/2) log |P_y|,   where   P = [ P_x  P_{yx} ; P_{xy}  P_y ].

It is obvious that the formula for the meta-Gaussian case is similar to the formula for the Gaussian case
I_Gauss(X; Y) = −(1/2) log |Σ| + (1/2) log |Σ_x| + (1/2) log |Σ_y|, but uses the correlation matrix parametrizing
the copula instead of the data covariance matrix. The two formulas are equivalent when X, Y are
jointly Gaussian.
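As a quick illustration (ours, not part of the paper), the formula above can be evaluated directly from an estimated copula correlation matrix:

```python
import numpy as np

def meta_gaussian_mi(P, p):
    """I(X;Y) for a meta-Gaussian vector from the copula correlation matrix P;
    the first p rows/columns of P correspond to X, the remaining ones to Y."""
    logdet = lambda M: np.linalg.slogdet(M)[1]
    return -0.5 * logdet(P) + 0.5 * logdet(P[:p, :p]) + 0.5 * logdet(P[p:, p:])
```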
4.3 Semi-parametric copula estimation.
Semi-parametric copula estimation has been studied in [10], [11] and [12]. The main idea is to
combine non-parametric estimation of the margins with a parametric copula model, in our case the
Gaussian copula family. If the margins F_1, ..., F_d of a random vector Z are known, P can be
estimated by the matrix P̂ with elements given by:

    P̂_{(k,l)} = [ (1/n) Σ_{i=1}^n Φ^{-1}(F_k(z_{ik})) Φ^{-1}(F_l(z_{il})) ] / [ (1/n) Σ_{i=1}^n [Φ^{-1}(F_k(z_{ik}))]^2 · (1/n) Σ_{i=1}^n [Φ^{-1}(F_l(z_{il}))]^2 ]^{1/2},   (19)

where z_{ik} denotes the i-th observation of dimension k. P̂ is assured to be positive semi-definite. If
the margins are unknown we can instead use the rescaled empirical cumulative distributions:

    F̂_j(t) = (n/(n+1)) · (1/n) Σ_{i=1}^n 1{z_{ij} ≤ t}.   (20)
The estimator resulting from using the rescaled empirical distributions (20) in equation (19) is given
in the following definition.
Definition 4.1 (Normal scores rank correlation coefficient). The normal scores rank correlation
coefficient is the matrix P̂^n with elements:

    P̂^n_{(k,l)} = [ Σ_{i=1}^n Φ^{-1}(R(z_{ik})/(n+1)) Φ^{-1}(R(z_{il})/(n+1)) ] / [ Σ_{i=1}^n (Φ^{-1}(i/(n+1)))^2 ],   (21)

where R(z_{ik}) denotes the rank of the i-th observation for dimension k. Robustness properties of
the estimator (21) have been studied in [13]. Using (21) we compute an estimate of the correlation
matrix P parametrizing c_XY and obtain the transformation matrix A as detailed in Algorithm 1.
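As an illustration (ours, not from the paper), a minimal NumPy sketch of the estimator (21); SciPy's rankdata and norm.ppf are assumed conveniences for the ranks R(z_{ik}) and the normal quantile Φ^{-1}:

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores_corr(z):
    """Normal scores rank correlation P-hat^n of an (n x d) data matrix z, as in eq. (21)."""
    n = z.shape[0]
    scores = norm.ppf(rankdata(z, axis=0) / (n + 1))   # Phi^{-1}(R(z_ik) / (n+1))
    denom = np.sum(norm.ppf(np.arange(1, n + 1) / (n + 1)) ** 2)
    return scores.T @ scores / denom
```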
Algorithm 1 Construction of the transformation matrix A
1. Compute the normal scores rank correlation estimate P̂^n of the correlation matrix P parametrizing c_XY:
   for k, l = 1, ..., p + q do
     Set the (k, l)-th element of P̂^n to [ Σ_{i=1}^n Φ^{-1}(R(z_{ik})/(n+1)) Φ^{-1}(R(z_{il})/(n+1)) ] / [ Σ_{i=1}^n (Φ^{-1}(i/(n+1)))^2 ] as in equation (21), where the i-th row of z is the concatenation of the i-th rows of x and y: z_{i·} = (x_{i·}, y_{i·}) ∈ R^{p+q}.
   end for
2. Compute the estimated conditional covariance matrix of the normal scores: Σ̂_{x̃|ỹ} = P̂^n_x − P̂^n_{xy} (P̂^n_y)^{-1} P̂^n_{yx}.
3. Find the eigenvectors and eigenvalues of Σ̂_{x̃|ỹ} (P̂^n_x)^{-1}.
4. Construct the transformation matrix A as in equation (5).
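A sketch (ours) of steps 1-3 of Algorithm 1, reusing normal_scores_corr from the sketch above. Equation (5), which scales the eigenvectors as a function of β, is not reproduced in this excerpt, so step 4 is only indicated by a comment; treat this as a hedged sketch, not the definitive construction.

```python
def mgib_projection_basis(x, y):
    """Steps 1-3 of Algorithm 1: the eigendecomposition underlying the MGIB projection."""
    p = x.shape[1]
    P = normal_scores_corr(np.hstack([x, y]))            # copula correlation of (X, Y)
    Px, Py, Pxy = P[:p, :p], P[p:, p:], P[:p, p:]
    cond = Px - Pxy @ np.linalg.solve(Py, Pxy.T)         # conditional covariance of the normal scores
    # Left eigenvectors of cond @ inv(Px) are the eigenvectors of its transpose.
    eigvals, eigvecs = np.linalg.eig((cond @ np.linalg.inv(Px)).T)
    # Step 4: stack the beta-scaled eigenvectors into the rows of A as in equation (5).
    return eigvals, eigvecs
```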
5 Results

5.1 Simulations
We tested meta-Gaussian IB (MGIB) in two different settings: first when the data is Gaussian but contains
outliers, second when the data has a Gaussian copula but non-Gaussian margins. We generated
a training sample with n = 1000 observations of X and Y with dimensions fixed to d_x = 15 and
d_y = 15. A covariance matrix was drawn from a Wishart distribution centered at a correlation matrix
populated with a few high correlation values to ensure some dependency between X and Y. This
matrix was then scaled to obtain the correlation matrix parametrizing the copula. In the first setting
the data was sampled with N(0, 1) margins. A fixed percentage of outliers, 8%, was then introduced
into the sample by randomly drawing a row and a column in the data matrix and replacing the current
value with a random draw from the set [−6, −3] ∪ [3, 6]. In the second setting data points were
drawn from meta-Gaussian distributions with three different types of margins: Student with df = 4,
exponential with λ = 1, and beta with α₁ = 0.5 = α₂. For each training sample two projection
matrices A_G and A_C were computed: A_G was calculated based on the sample covariance Σ̂^n, and
A_C was obtained using the normal scores rank correlation P̂^n. The compression quality of the projection was then tested on a test sample of n = 100 000 observations generated independently from
the same distribution (without outliers). Each experiment was repeated 50 times. Figure 2 shows the
information curves obtained by varying β from 0.1 to 200. The mutual informations I(X;T) and
I(Y;T) can be reliably estimated on the test sample using (18) and (21). The information curves start
with a very steep slope, meaning that a small increase in I(X;T) leads to a significant increase in
I(Y;T), and then slowly saturate to reach their asymptotic limit in I(Y;T). The best information
curves are situated in the upper left corner of the figure, since for a fixed compression value I(X;T)
we want to achieve the highest relevant information content I(Y;T). We clearly see in Figure 2 that
MGIB consistently outperforms GIB in that it achieves higher compression rates.
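For reference, a minimal sketch (ours) of how the meta-Gaussian samples of the second setting can be generated: draw from the Gaussian copula, then push through the inverse marginal cdfs.

```python
import numpy as np
from scipy import stats

def sample_meta_gaussian(P, margins, n):
    """n samples with Gaussian copula correlation P and the given margins
    (a list of frozen scipy.stats distributions, one per dimension)."""
    z = np.random.multivariate_normal(np.zeros(P.shape[0]), P, size=n)
    u = stats.norm.cdf(z)                                # Gaussian copula, uniform margins
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(margins)])

# e.g. margins built from stats.t(df=4), stats.expon(scale=1.0), stats.beta(0.5, 0.5)
```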
[Figure 2: four panels of information curves, I(Y;T) versus I(X;T), for MGIB (red) and GIB (black); panels: Gaussian with outliers, Student margins, Exponential margins, Beta margins.]

Figure 2: Information curves for Gaussian data with outliers and for data with Student, Exponential and
Beta margins. Each panel shows 50 curves obtained for repetitions of the MGIB (red) and the GIB
(black). The curves stop when they come close to saturation. For higher values of β the information
I(X;T) would continue to grow while I(Y;T) would reach its limit, leading to horizontal lines,
but such high β values lead to numerical instability. Since GIB suffers from a model mismatch
problem when the margins are not Gaussian, the curves saturate at smaller values of I(Y;T).
5.2 Real data
We further applied MGIB to the Communities and Crime data set from the UCI repository¹. The
data set contains observations of predictive and target variables. After removing missing values we
retained n = 2195 observations. In a pre-processing step we selected the d_x = 10 dimensions
with the strongest absolute rank correlation to one of the relevance variables. Plotting empirical
information curves as in the synthetic examples above was impossible, because even for this setting
with drastically decreased dimensionality all mutual information estimates we tried (including the
nearest-neighbor graph method in [14]) were too unstable to draw empirical information curves. To
still give a graphical representation of our results, we show in Figure 3 non-parametric density estimates of the one-dimensional compression T split into 5 groups according to corresponding values of
the first relevance variable. We used GIB, MGIB and Principal Component Analysis (PCA) to reduce
X to a 1-dimensional variable. For PCA this is the first principal component; for GIB and MGIB
we independently selected the highest value of β leading to a 1-dimensional compression. It is obvious from Figure 3 that the one-dimensional MGIB compression nicely separates the different target
classes, whereas the GIB and PCA projections seem to contain much less information about the
target variable. We conclude that, similar to our synthetic examples above, the MGIB compression
contains more information about the relevance variable than GIB at the same compression rate.
[Figure 3: three panels of density estimates of the compression T, for Meta-Gaussian IB, Gaussian IB, and PCA, with groups Y1 ∈ (−3.5, 0), (0, 0.5), (0.5, 1), (1, 1.5), (1.5, 3.5).]

Figure 3: Parzen density estimates of the univariate projection of X split into 5 groups according to
values of the first relevance variable. We see more separation between groups for MGIB than for
GIB or PCA, which indicates that the projection is more informative about the relevance variable.
6 Conclusion
We present a reformulation of the IB problem in terms of copula which gives new insights into data
compression with relevance constraints and opens new possible applications of IB for continuous
multivariate data. Meta-Gaussian IB naturally extends the analytical solution of Gaussian IB to
multivariate distributions with Gaussian copula and arbitrary marginal density. It can be applied
to any type of continuous data, provided the assumption of a Gaussian dependence structure is
reasonable, in which case the optimal compression can easily be obtained by semi-parametric copula
estimation. Simulated experiments showed that MGIB clearly outperforms GIB when the marginal
densities are not Gaussian, and even in the Gaussian case with a tiny amount of outliers MGIB has
been shown to significantly benefit from the robustness properties of rank estimators. In future work,
it would be interesting to see if the copula formulation of IB admits analytical solutions for other
copula families.
Acknowledgments
M. Rey is partially supported by the Swiss National Science Foundation, grant CR32I2 127017 / 1.

¹ http://archive.ics.uci.edu/ml/
References
[1] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. The 37th annual Allerton Conference on Communication, Control, and Computing, pages 368-377, 1999.
[2] O. Shamir, S. Sabato, and N. Tishby. Learning and generalization with the information bottleneck. Theor. Comput. Sci., 411(29-30):2696-2711, 2010.
[3] G. Chechik, A. Globerson, N. Tishby, and Y. Weiss. Information bottleneck for Gaussian variables. Journal of Machine Learning Research, 6:165-188, 2005.
[4] A. Globerson and N. Tishby. On the optimality of the Gaussian information bottleneck curve. Hebrew University Technical Report, 2004.
[5] R. M. Hecht, E. Noor, and N. Tishby. Speaker recognition by Gaussian information bottleneck. INTERSPEECH, pages 1567-1570, 2009.
[6] J. Ma and Z. Sun. Mutual information is copula entropy. arXiv:0808.0845v1, 2008.
[7] A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris, 8:229-231, 1959.
[8] A. J. McNeil, R. Frey, and P. Embrechts. Quantitative Risk Management. Princeton Series in Finance. Princeton University Press, 2005.
[9] G. Elidan. Copula Bayesian networks. Proceedings of the Neural Information Processing Systems (NIPS), 2010.
[10] C. Genest, K. Ghoudhi, and L. P. Rivet. A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika, 82(3):543-552, 1995.
[11] H. Tsukahara. Semiparametric estimation in copula models. The Canadian Journal of Statistics, 33(3):357-375, 2005.
[12] P. D. Hoff. Extending the rank likelihood for semiparametric copula estimation. Annals of Applied Statistics, 1(1):273, 2007.
[13] K. Boudt, J. Cornelissen, and C. Croux. The Gaussian rank correlation estimator: Robustness properties. Statistics and Computing, 22:471-483, 2012.
[14] D. Pál, B. Póczos, and C. Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. Proceedings of the Neural Information Processing Systems (NIPS), 2010.
3,886 | 4,518 | Factoring nonnegative matrices with linear programs
Victor Bittorf
[email protected]
Benjamin Recht
[email protected]
Computer Sciences
University of Wisconsin
Christopher R?e
[email protected]
Joel A. Tropp
Computing and Mathematical Sciences
California Institute of Technology
[email protected]
Abstract
This paper describes a new approach, based on linear programming, for computing nonnegative matrix factorizations (NMFs). The key idea is a data-driven
model for the factorization where the most salient features in the data are used to
express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C that satisfies X ≈ CX and some linear constraints.
The constraints are chosen to ensure that the matrix C selects features; these features can then be used to find a low-rank NMF of X. A theoretical analysis
demonstrates that this approach has guarantees similar to those of the recent NMF
algorithm of Arora et al. (2012). In contrast with this earlier work, the proposed
method extends to more general noise models and leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the
new approach is also superior in practice. An optimized C++ implementation can
factor a multigigabyte matrix in a matter of minutes.
1 Introduction

Nonnegative matrix factorization (NMF) is a popular approach for selecting features in data [16-18,
23]. Many machine-learning and data-mining software packages (including Matlab [3], R [12], and
Oracle Data Mining [1]) now include heuristic computational methods for NMF. Nevertheless, we
still have limited theoretical understanding of when these heuristics are correct.
The difficulty in developing rigorous methods for NMF stems from the fact that the problem is
computationally challenging. Indeed, Vavasis has shown that NMF is NP-Hard [27]; see [4] for
further worst-case hardness results. As a consequence, we must instate additional assumptions on
the data if we hope to compute nonnegative matrix factorizations in practice.
In this spirit, Arora, Ge, Kannan, and Moitra (AGKM) have exhibited a polynomial-time algorithm
for NMF that is provably correct, provided that the data is drawn from an appropriate model, based
on ideas from [8]. The AGKM result describes one circumstance where we can be sure that NMF
algorithms are capable of producing meaningful answers. This work has the potential to make an
impact in machine learning because proper feature selection is an important preprocessing step for
many other techniques. Even so, the actual impact is damped by the fact that the AGKM algorithm
is too computationally expensive for large-scale problems and is not tolerant to departures from the
modeling assumptions. Thus, for NMF, there remains a gap between the theoretical exercise and the
actual practice of machine learning.
The present work presents a scalable, robust algorithm that can successfully solve the NMF problem
under appropriate hypotheses. Our first contribution is a new formulation of the nonnegative feature
selection problem that only requires the solution of a single linear program. Second, we provide
a theoretical analysis of this algorithm. This argument shows that our method succeeds under the
same modeling assumptions as the AGKM algorithm with an additional margin constraint that is
common in machine learning. We prove that if there exists a unique, well-defined model, then we
can recover this model accurately; our error bound improves substantially on the error bound for
the AGKM algorithm in the high SNR regime. One may argue that NMF only "makes sense" (i.e.,
is well posed) when a unique solution exists, and so we believe our result has independent interest.
Furthermore, our algorithm can be adapted for a wide class of noise models.
In addition to these theoretical contributions, our work also includes a major algorithmic and experimental component. Our formulation of NMF allows us to exploit methods from operations research
and database systems to design solvers that scale to extremely large datasets. We develop an efficient
stochastic gradient descent (SGD) algorithm that is (at least) two orders of magnitude faster than the
approach of AGKM when both are implemented in Matlab. We describe a parallel implementation
of our SGD algorithm that can robustly factor matrices with 10^5 features and 10^6 examples in a few
minutes on a multicore workstation.
Our formulation of NMF uses a data-driven modeling approach to simplify the factorization problem. More precisely, we search for a small collection of rows from the data matrix that can be
used to express the other rows. This type of approach appears in a number of other factorization
problems, including rank-revealing QR [15], interpolative decomposition [20], subspace clustering [10, 24], dictionary learning [11], and others. Our computational techniques can be adapted to
address large-scale instances of these problems as well.
2 Separable Nonnegative Matrix Factorizations and Hott Topics
Notation. For a matrix M and indices i and j, we write M_{i·} for the i-th row of M and M_{·j} for the
j-th column of M. We write M_{ij} for the (i, j) entry.

Let Y be a nonnegative f × n data matrix with columns indexing examples and rows indexing
features. Exact NMF seeks a factorization Y = FW where the feature matrix F is f × r, where
the weight matrix W is r × n, and both factors are nonnegative. Typically, r ≪ min{f, n}.
Unless stated otherwise, we assume that each row of the data matrix Y is normalized so it sums to
one. Under this hypothesis, we may also assume that each row of F and of W also sums to one [4].
It is notoriously difficult to solve the NMF problem. Vavasis showed that it is NP-complete to decide
whether a matrix admits a rank-r nonnegative factorization [27]. AGKM proved that an exact NMF
algorithm can be used to solve 3-SAT in subexponential time [4].
The literature contains some mathematical analysis of NMF that can be used to motivate algorithmic
development. Thomas [25] developed a necessary and sufficient condition for the existence of a
rank-r NMF. More recently, Donoho and Stodden [8] obtained a related sufficient condition for
uniqueness. AGKM exhibited an algorithm that can produce a nonnegative matrix factorization
under a weaker sufficient condition. To state their results, we need a definition.
Definition 2.1 A set of vectors {v_1, ..., v_r} ⊂ R^d is simplicial if no vector v_i lies in the convex
hull of {v_j : j ≠ i}. The set of vectors is γ-robust simplicial if, for each i, the ℓ_1 distance from v_i
to the convex hull of {v_j : j ≠ i} is at least γ. Figure 1 illustrates these concepts.
These ideas support the uniqueness results of Donoho and Stodden and the AGKM algorithm. Indeed, we can find an NMF of Y efficiently if Y contains a set of r rows that is simplicial and whose
convex hull contains the remaining rows.
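As an aside (our illustration, not from the paper), the robustness parameter γ of Definition 2.1 can be computed exactly with r small linear programs, one per vector; a sketch assuming cvxpy as the solver interface:

```python
import numpy as np
import cvxpy as cp

def robustness_margin(V):
    """Smallest l1 distance from any row of V to the convex hull of the other rows;
    the rows are gamma-robust simplicial iff this value is at least gamma."""
    dists = []
    for i in range(V.shape[0]):
        others = np.delete(V, i, axis=0)
        lam = cp.Variable(others.shape[0], nonneg=True)
        prob = cp.Problem(cp.Minimize(cp.norm1(V[i] - others.T @ lam)),
                          [cp.sum(lam) == 1])
        dists.append(prob.solve())
    return min(dists)
```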
Definition 2.2 An NMF Y = FW is called separable if the rows of W are simplicial and there is
a permutation matrix Π such that

    ΠF = [ I_r ; M ].   (1)
Algorithm 1: AGKM: Approximably Separable Nonnegative Matrix Factorization [4]
1: Initialize R = ∅.
2: Compute the f × f matrix D with D_ij = ‖X_{i·} − X_{j·}‖_1.
3: for k = 1, ..., f do
4:   Find the set N_k of rows that are at least 5ε/γ + 2ε away from X_{k·}.
5:   Compute the distance δ_k of X_{k·} from conv({X_{j·} : j ∈ N_k}).
6:   if δ_k > 2ε, add k to the set R.
7: end for
8: Cluster the rows in R as follows: j and k are in the same cluster if D_jk ≤ 10ε/γ + 6ε.
9: Choose one element from each cluster to yield W.
10: F = arg min_{Z∈R^{f×r}} ‖X − ZW‖_{∞,1}

[Figure 1: Numbered circles are hott topics. Their convex hull (orange) contains the other topics (small circles), so the data admits a separable NMF. The arrow d_1 marks the ℓ_1 distance from hott topic (1) to the convex hull of the other two hott topics; definitions of d_2 and d_3 are similar. The hott topics are γ-robustly simplicial when each d_i ≥ γ.]
To compute a separable factorization of Y , we must first identify a simplicial set of rows from Y .
Afterward, we compute weights that express the remaining rows as convex combinations of this
distinguished set. We call the simplicial rows hott and the corresponding features hott topics.
This model allows us to express all the features for a particular instance if we know the values of
the instance at the simplicial rows. This assumption can be justified in a variety of applications. For
example, in text, knowledge of a few keywords may be sufficient to reconstruct counts of the other
words in a document. In vision, localized features can be used to predict gestures. In audio data, a
few bins of the spectrogram may allow us to reconstruct the remaining bins.
While a nonnegative matrix one encounters in practice might not admit a separable factorization, it
may be well approximated by a nonnegative matrix with separable factorization. AGKM derived an
algorithm for nonnegative matrix factorization of a matrix that is well approximated by a separable
factorization. To state their result, we introduce a norm on f × n matrices:

    ‖Δ‖_{∞,1} := max_{1≤i≤f} Σ_{j=1}^n |Δ_ij|.

Theorem 2.3 (AGKM [4]) Let ε and γ be nonnegative constants satisfying ε ≤ γ²/(20 + 13γ). Let X be
a nonnegative data matrix. Assume X = Y + Δ where Y is a nonnegative matrix whose rows
have unit ℓ_1 norm, where Y = FW is a rank-r separable factorization in which the rows of W
are γ-robust simplicial, and where ‖Δ‖_{∞,1} ≤ ε. Then Algorithm 1 finds a rank-r nonnegative
factorization F̂Ŵ that satisfies the error bound

    ‖X − F̂Ŵ‖_{∞,1} ≤ 10ε/γ + 7ε.
In particular, the AGKM algorithm computes the factorization exactly when ε = 0. Although
this method is guaranteed to run in polynomial time, it has many undesirable features. First, the
algorithm requires a priori knowledge of the parameters γ and ε. It may be possible to calculate
ε, but we can only estimate γ if we know which rows are hott. Second, the algorithm computes
all ℓ_1 distances between rows at a cost of O(f²n). Third, for every row in the matrix, we must
determine its distance to the convex hull of the rows that lie at a sufficient distance; this step requires
us to solve a linear program for each row of the matrix at a cost of Ω(fn). Finally, this method is
intimately linked to the choice of the error norm ‖·‖_{∞,1}. It is not obvious how to adapt the algorithm
for other noise models. We present a new approach, based on linear programming, that overcomes
these drawbacks.
3 Main Theoretical Results: NMF by Linear Programming

This paper shows that we can factor an approximately separable nonnegative matrix by solving a
linear program. A major advantage of this formulation is that it scales to very large data sets.
Algorithm 2 Separable Nonnegative Matrix Factorization by Linear Programming
Require: An f × n nonnegative matrix Y with a rank-r separable NMF.
Ensure: An f × r matrix F and r × n matrix W with F ≥ 0, W ≥ 0, and Y = FW.
1: Find the unique C ∈ Φ(Y) to minimize p^T diag(C) where p is any vector with distinct values.
2: Let I = {i : C_ii = 1} and set W = Y_{I·} and F = C_{·I}.
Here is the key observation: Suppose that Y is any f × n nonnegative matrix that admits a rank-r
separable factorization Y = FW. If we pad F with zeros to form an f × f matrix, we have

    Y = Π^T [ I_r  0 ; M  0 ] Π Y =: CY.

We call the matrix C factorization localizing. Note that any factorization localizing matrix C is an
element of the polyhedral set

    Φ(Y) := {C ≥ 0 : CY = Y, Tr(C) = r, C_jj ≤ 1 ∀j, C_ij ≤ C_jj ∀i, j}.

Thus, to find an exact NMF of Y, it suffices to find a feasible element C ∈ Φ(Y) whose
diagonal is integral. This task can be accomplished by linear programming. Once we have such
a C, we construct W by extracting the rows of X that correspond to the indices i where C_ii =
1. We construct the feature matrix F by extracting the nonzero columns of C. This approach is
summarized in Algorithm 2. In turn, we can prove the following result.
Theorem 3.1 Suppose Y is a nonnegative matrix with a rank-r separable factorization Y = F W .
Then Algorithm 2 constructs a rank-r nonnegative matrix factorization of Y .
As the theorem suggests, we can isolate the rows of Y that yield a simplicial factorization by solving
a single linear program. The factor F can be found by extracting columns of C.
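For concreteness, a minimal sketch (ours) of Algorithm 2 as a single LP, written against cvxpy (an assumed dependency; any LP solver applies) and intended only for the noiseless case:

```python
import numpy as np
import cvxpy as cp

def separable_nmf_lp(Y, r):
    """Sketch of Algorithm 2: recover hott rows of Y by minimizing p^T diag(C) over Phi(Y)."""
    f = Y.shape[0]
    p = 1.0 + np.random.permutation(f)                  # cost vector with distinct entries
    C = cp.Variable((f, f), nonneg=True)
    d = cp.diag(C)
    constraints = [C @ Y == Y, cp.trace(C) == r, d <= 1]
    constraints += [C[:, j] <= d[j] for j in range(f)]  # C_ij <= C_jj
    cp.Problem(cp.Minimize(p @ d), constraints).solve()
    I = np.where(np.isclose(C.value.diagonal(), 1.0))[0]
    return C.value[:, I], Y[I, :]                       # F, W with Y = F W
```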
3.1 Robustness to Noise
Suppose we observe a nonnegative matrix X whose rows sum to one. Assume that X = Y + Δ
where Y is a nonnegative matrix whose rows sum to one, which has a rank-r separable factorization
Y = FW such that the rows of W are γ-robust simplicial, and where ‖Δ‖_{∞,1} ≤ ε. Define the
polyhedral set

    Φ_τ(X) := {C ≥ 0 : ‖CX − X‖_{∞,1} ≤ τ, Tr(C) = r, C_jj ≤ 1 ∀j, C_ij ≤ C_jj ∀i, j}.

The set Φ_τ(X) consists of matrices C that approximately locate a factorization of X. We can prove
the following result.

Theorem 3.2 Suppose that X satisfies the assumptions stated in the previous paragraph. Furthermore, assume that for every row Y_{j·} that is not hott, we have the margin constraint
‖Y_{j·} − Y_{i·}‖_1 ≥ d_0 for all hott rows i. Then we can find a nonnegative factorization satisfying

    ‖X − FW‖_{∞,1} ≤ 2ε,

provided that ε < min{γd_0, γ²}/(9(r+1)). Furthermore, this factorization correctly identifies the hott topics
appearing in the separable factorization of Y.
Algorithm 3 requires the solution of two linear programs. The first minimizes a cost vector over
Φ_{2ε}(X); this lets us find Ŵ. Afterward, the matrix F̂ can be found by setting

    F̂ = arg min_{Z≥0} ‖X − ZŴ‖_{∞,1}.   (2)

Our robustness result requires a margin-type constraint assuming that the original configuration
consists either of duplicate hott topics, or of topics that are reasonably far away from the hott topics. On
the other hand, under such a margin constraint, we can construct a considerably better approximation
than that guaranteed by the AGKM algorithm. Moreover, unlike AGKM, our algorithm does not need to
know the parameter γ.
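The fitting step (2) is itself a linear program; a hedged sketch (ours, again assuming cvxpy):

```python
def fit_F(X, W):
    """Solve F = argmin_{Z >= 0} ||X - Z W||_{inf,1} as an LP."""
    Z = cp.Variable((X.shape[0], W.shape[0]), nonneg=True)
    row_l1 = cp.sum(cp.abs(X - Z @ W), axis=1)          # l1 norm of each residual row
    cp.Problem(cp.Minimize(cp.max(row_l1))).solve()     # (inf,1)-norm objective
    return Z.value
```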
Algorithm 3 Approximably Separable Nonnegative Matrix Factorization by Linear Programming
Require: An f × n nonnegative matrix X that satisfies the hypotheses of Theorem 3.2.
Ensure: An f × r matrix F and r × n matrix W with F ≥ 0, W ≥ 0, and ‖X − FW‖_{∞,1} ≤ 2ε.
1: Find C ∈ Φ_{2ε}(X) that minimizes p^T diag(C) where p is any vector with distinct values.
2: Let I = {i : C_ii = 1} and set W = X_{I·}.
3: Set F = arg min_{Z∈R^{f×r}} ‖X − ZW‖_{∞,1}.
The proofs of Theorems 3.1 and 3.2 can be found in the extended version of this paper [6]. The main idea
is to show that we can only represent a hott topic efficiently using the hott topic itself. Some earlier
versions of this paper contained incomplete arguments, which we have remedied. For a significantly
stronger robustness analysis of Algorithm 3, see the recent paper [13].
Having established these theoretical guarantees, it now remains to develop an algorithm to solve
the LP. Off-the-shelf LP solvers may suffice for moderate-size problems, but for large-scale matrix
factorization problems, their running time is prohibitive, as we show in Section 5. In Section 4, we
turn to describe how to solve Algorithm 3 efficiently for large data sets.
3.2 Related Work
Localizing factorizations via column or row subset selection is a popular alternative to direct factorization methods such as the SVD. Interpolative decompositions such as rank-revealing QR [15]
and CUR [20] have favorable efficiency properties as compared to factorizations (such as SVD) that
are not based on exemplars. Factorization localization has been used in subspace clustering and has
been shown to be robust to outliers [10, 24].
In recent work on dictionary learning, Esser et al. and Elhamifar et al. have proposed a factorization
localization solution to nonnegative matrix factorization using group sparsity techniques [9, 11].
Esser et al. prove asymptotic exact recovery in a restricted noise model, but this result requires
preprocessing to remove duplicate or near-duplicate rows. Elhamifar shows exact representative
recovery in the noiseless setting assuming no hott topics are duplicated. Our work here improves
upon this work in several aspects, enabling finite sample error bounds, the elimination of any need
to preprocess the data, and algorithmic implementations that scale to very large data sets.
4 Incremental Gradient Algorithms for NMF
The rudiments of our fast implementation rely on two standard optimization techniques: dual decomposition and incremental gradient descent. Both techniques are described in depth in Chapters
3.4 and 7.8 of Bertsekas and Tsitsiklis [5].
We aim to minimize p^T diag(C) subject to C ∈ Φ_τ(X). To proceed, form the Lagrangian

    L(C, β, w) = p^T diag(C) + β(Tr(C) − r) + Σ_{i=1}^f w_i (‖X_{i·} − [CX]_{i·}‖_1 − τ)

with multipliers β and w ≥ 0. Note that we do not dualize out all of the constraints. The remaining
ones appear in the constraint set Φ_0 = {C : C ≥ 0, diag(C) ≤ 1, and C_ij ≤ C_jj for all i, j}.
Dual subgradient ascent solves this problem by alternating between minimizing the Lagrangian over
the constraint set Φ_0, and then taking a subgradient step with respect to the dual variables

    w_i ← w_i + s (‖X_{i·} − [C*X]_{i·}‖_1 − τ)   and   β ← β + s (Tr(C*) − r)

where C* is the minimizer of the Lagrangian over Φ_0. The update of w_i makes very little difference
in the solution quality, so we typically only update β.

We minimize the Lagrangian using projected incremental gradient descent. Note that we can rewrite
the Lagrangian as

    L(C, β, w) = −τ 1^T w − βr + Σ_{k=1}^n [ Σ_{j∈supp(X_{·k})} ( w_j ‖X_{jk} − [CX]_{jk}‖_1 + μ_j (p_j + β) C_jj ) ].
Algorithm 4 HOTTOPIXX: Approximate Separable NMF by Incremental Gradient Descent
Require: An f × n nonnegative matrix X. Primal and dual stepsizes s_p and s_d.
Ensure: An f × r matrix F and r × n matrix W with F ≥ 0, W ≥ 0, and ‖X − FW‖_{∞,1} ≤ 2ε.
1: Pick a cost p with distinct entries.
2: Initialize C = 0, β = 0.
3: for t = 1, ..., N_epochs do
4:   for i = 1, ..., n do
5:     Choose k uniformly at random from [n].
6:     C ← C + s_p · sign(X_{·k} − CX_{·k}) X_{·k}^T − s_p diag(μ ∘ (β1 + p)).
7:   end for
8:   Project C onto Φ_0.
9:   β ← β + s_d (Tr(C) − r).
10: end for
11: Let I = {i : C_ii = 1} and set W = X_{I·}.
12: Set F = arg min_{Z∈R^{f×r}} ‖X − ZW‖_{∞,1}.
Here, supp(x) is the set indexing the entries where x is nonzero, and μ_j is the number of nonzeros
in row j divided by n. The incremental gradient method chooses one of the n summands at random
and follows its subgradient. We then project the iterate onto the constraint set Φ_0. The projection
onto Φ_0 can be performed in the time required to sort the individual columns of C plus a linear-time
operation. The full procedure is described in the extended version of this paper [6]. In the case
where we expect a unique solution, we can drop the constraint C_ij ≤ C_jj, resulting in a simple
clipping procedure: set all negative items to zero and set any diagonal entry exceeding one to one.

In practice, we perform a tradeoff. Since the constraint C_ij ≤ C_jj is used solely for symmetry
breaking, we have found empirically that we only need to project onto Φ_0 every n iterations or so.
This incremental iteration is repeated n times in a phase called an epoch. After each epoch, we
update the dual variables and quit after we believe we have identified the large elements of the
diagonal of C. Just as before, once we have identified the hott rows, we can form W by selecting
these rows of X. We can find F just as before, by solving (2). Note that this minimization can
also be computed by incremental subgradient descent. The full procedure, called HOTTOPIXX, is
described in Algorithm 4.
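A minimal NumPy sketch (ours) of the main loop of Algorithm 4, using the simple clipping projection discussed above (the C_ij ≤ C_jj constraint is dropped, so this assumes a unique solution); the stepsizes mirror the values used in Section 5, and the final thresholding of the diagonal is our simplification:

```python
import numpy as np

def hottopixx(X, r, n_epochs=50, sp=1e-1, sd=1e-2):
    """Sketch of Algorithm 4: returns the indices of the identified hott rows."""
    f, n = X.shape
    p = 1.0 + np.random.permutation(f)               # cost vector with distinct entries
    mu = (X != 0).sum(axis=1) / n                    # mu_j: nonzeros in row j over n
    C, beta = np.zeros((f, f)), 0.0
    for _ in range(n_epochs):
        for _ in range(n):
            k = np.random.randint(n)
            xk = X[:, k]
            C += sp * np.outer(np.sign(xk - C @ xk), xk)      # step 6, data term
            C[np.diag_indices(f)] -= sp * mu * (beta + p)     # step 6, diagonal term
        np.clip(C, 0.0, None, out=C)                          # project onto Phi_0 ...
        np.fill_diagonal(C, np.minimum(C.diagonal(), 1.0))    # ... by clipping
        beta += sd * (np.trace(C) - r)                        # step 9, dual update
    return np.sort(np.argsort(C.diagonal())[-r:])             # r largest diagonal entries
```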
4.1 Sparsity and Computational Enhancements for Large Scale.
For small-scale problems, H OTTOPIXX can be implemented in a few lines of Matlab code. But for
the very large data sets studied in Section 5, we take advantage of natural parallelism and a host
of low-level optimizations that are also enabled by our formulation. As in any numerical program,
memory layout and cache behavior can be critical factors for performance. We use standard techniques: in-memory clustering to increase prefetching opportunities, padded data structures for better
cache alignment, and compiler directives to allow the Intel compiler to apply vectorization.
Note that the incremental gradient step (step 6 in Algorithm 4) only modifies the entries of C where
X_{·k} is nonzero. Thus, we can parallelize the algorithm with respect to updating either the rows
or the columns of C. We store X in large contiguous blocks of memory to encourage hardware
prefetching. In contrast, we choose a dense representation of our localizing matrix C; this choice
trades space for runtime performance.
Each worker thread is assigned a number of rows of C so that all rows fit in the shared L3 cache.
Then, each worker thread repeatedly scans X while marking updates to multiple rows of C. We
repeat this process until all rows of C are scanned, similar to the classical block-nested loop join in
relational databases [22].
5 Experiments
Except for the speedup curves, all of the experiments were run on an identical configuration: a dual
Xeon X650 (6 cores each) machine with 128GB of RAM. The kernel is Linux 2.6.32-131.
[Figure 2: six panels of performance profiles (Pr(error ≤ τ·error_min), Pr(RMSE ≤ τ·RMSE_min), Pr(time ≤ τ·time_min) versus τ) comparing hott, hott (fast), hott (lp), and AGKM.]

Figure 2: Performance profiles for synthetic data. (a) (∞,1)-norm error for 40 × 400 sized instances and
(b) all instances. (c) is the performance profile for running time on all instances. RMSE performance profiles
for the (d) small scale and (e) medium scale experiments. (f) (∞,1)-norm error for the instances with η ≥ 1. In the noisy
examples, even 4 epochs of HOTTOPIXX is sufficient to obtain competitive reconstruction error.
In small-scale, synthetic experiments, we compared HOTTOPIXX to the AGKM algorithm and the
linear programming formulation of Algorithm 3 implemented in Matlab. Both AGKM and Algorithm 3 were run using CVX [14] coupled to the SDPT3 solver [26]. We ran HOTTOPIXX for 50
epochs with primal stepsize 1e-1 and dual stepsize 1e-2. Once the hott topics were identified, we fit
F using two cleaning epochs of incremental gradient descent for all three algorithms.

To generate our instances, we sampled r hott topics uniformly from the unit simplex in R^n. These
topics were duplicated d times. We generated the remaining f − r(d+1) rows to be random convex
combinations of the hott topics, with the combinations selected uniformly at random. We then
added noise with (∞,1)-norm error bounded by η · γ²/(20 + 13γ). Recall that the AGKM algorithm is only
guaranteed to work for η < 1. We ran with f ∈ {40, 80, 160}, n ∈ {400, 800, 1600}, r ∈ {3, 5, 10},
d ∈ {0, 1, 2}, and η ∈ {0.25, 0.95, 4, 10, 100}. Each experiment was repeated 5 times.

Because we ran over 2000 experiments with 405 different parameter settings, it is convenient to use
performance profiles to compare the performance of the different algorithms [7]. Let P be the
set of experiments and A denote the set of different algorithms we are comparing. Let Q_a(p) be
the value of some performance metric of the experiment p ∈ P for algorithm a ∈ A. Then the
performance profile at τ for a particular algorithm is the fraction of the experiments where the value
of Q_a(p) lies within a factor of τ of the minimal value min_{a′∈A} Q_{a′}(p). That is,

    P_a(τ) = #{p ∈ P : Q_a(p) ≤ τ · min_{a′∈A} Q_{a′}(p)} / #(P).
In a performance profile, the higher a curve corresponding to an algorithm, the more often it outperforms the other algorithms. This gives a convenient way to contrast algorithms visually.
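For concreteness, a small sketch (ours) of how these profiles can be computed from a matrix of metric values:

```python
import numpy as np

def performance_profile(Q, taus):
    """Q[p, a] is the metric of algorithm a on experiment p; returns P_a(tau)
    for each tau in taus as an array of shape (len(taus), num_algorithms)."""
    best = Q.min(axis=1, keepdims=True)              # min over algorithms per experiment
    return np.array([(Q <= tau * best).mean(axis=0) for tau in taus])
```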
Our performance profiles are shown in Figure 2. The first two figures correspond to experiments
with f = 40 and n = 400. The third figure is for the synthetic experiments with all other values
of f and n. In terms of (∞,1)-norm error, the linear programming solver typically achieves the
lowest error. However, using SDPT3, it is prohibitively slow to factor larger matrices. On the other
hand, HOTTOPIXX achieves better noise performance than the AGKM algorithm in much less time.
Moreover, the AGKM algorithm must be fed the values of ε and γ in order to run. HOTTOPIXX does
not require this information and still achieves about the same error performance.

We also display a graph for running only four epochs (hott (fast)). This algorithm is by far the fastest
algorithm, but does not achieve as optimal a noise performance. For very high levels of noise,
however, it achieves a lower reconstruction error than the AGKM algorithm, whose performance
data set | features | documents | nonzeros | size (GB) | time (s)
jumbo    | 1600     | 64000     | 1.02e8   | 2.7       | 338
clueweb  | 44739    | 351849    | 1.94e7   | 0.27      | 478
RCV1     | 47153    | 781265    | 5.92e7   | 1.14      | 430

Table 1: Description of the large data sets. Time is to find 100 hott topics on the 12-core machines.
[Figure 3: three panels: speedup vs. number of threads (jumbo, clueweb); RMSE vs. number of topics (clueweb); class error vs. number of topics (RCV1).]

Figure 3: (left) The speedup over a serial implementation for HOTTOPIXX on the jumbo and clueweb data
sets. Note the superlinear speedup for up to 20 threads. (middle) The RMSE for the clueweb data set. (right)
The test error on the RCV1 CCAT class versus the number of hott topics. The horizontal line indicates the test
error achieved using all of the features.
degrades once η approaches or exceeds 1 (Figure 2(f)). We also provide performance profiles for
the root-mean-square error of the nonnegative matrix factorizations (Figure 2(d) and (e)). The
performance is qualitatively similar to that for the (∞,1)-norm.
We also coded HOTTOPIXX in C++, using the design principles described in Section 4.1, and ran on
three large data sets. We generated a large synthetic example (jumbo) as above with r = 100. We
generated a co-occurrence matrix of people and places from the ClueWeb09 Dataset [2], normalized
by TFIDF. We also used HOTTOPIXX to select features from the RCV1 data set to recognize the
class CCAT [19]. The statistics for these data sets can be found in Table 1.

In Figure 3 (left), we plot the speed-up over a serial implementation. In contrast to other parallel
methods that exhibit memory contention [21], we see superlinear speed-ups for up to 20 threads
due to hardware prefetching and cache effects. All three of our large data sets can be trained in
minutes, showing that we can scale HOTTOPIXX on both synthetic and real data. Our algorithm is
able to correctly identify the hott topics on the jumbo set. For clueweb, we plot the RMSE in Figure 3
(middle). This curve rolls off quickly for the first few hundred topics, demonstrating that our algorithm may be useful for dimensionality reduction in Natural Language Processing applications. For
RCV1, we trained an SVM on the set of features extracted by HOTTOPIXX and plot the misclassification error versus the number of topics in Figure 3 (right). With 1500 hott topics, we achieve 7%
misclassification error as compared to 5.5% with the entire set of features.
6 Discussion
This paper provides an algorithmic and theoretical framework for analyzing and deploying any factorization problem that can be posed as a linear (or convex) factorization localizing program. Future
work should investigate the applicability of HOTTOPIXX to other factorization localizing algorithms,
such as subspace clustering, and should revisit earlier theoretical bounds on such prior art.
Acknowledgments
The authors would like to thank Sanjeev Arora, Michael Ferris, Rong Ge, Nicolas Gillis, Ankur
Moitra, and Stephen Wright for helpful suggestions. BR is generously supported by ONR award
N00014-11-1-0723, NSF award CCF-1139953, and a Sloan Research Fellowship. CR is generously
supported by NSF CAREER award under IIS-1054009, ONR award N000141210041, and gifts or
research awards from American Family Insurance, Google, Greenplum, and Oracle. JAT is generously supported by ONR award N00014-11-1002, AFOSR award FA9550-09-1-0643, and a Sloan
Research Fellowship.
8
References
[1] docs.oracle.com/cd/B28359_01/datamine.111/b28129/algo_nmf.htm.
[2] lemurproject.org/clueweb09/.
[3] www.mathworks.com/help/toolbox/stats/nnmf.html.
[4] S. Arora, R. Ge, R. Kannan, and A. Moitra. Computing a nonnegative matrix factorization - provably. To appear in STOC 2012. Preprint available at arxiv.org/abs/1111.0952, 2011.
[5] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont, MA, 1997.
[6] V. Bittorf, B. Recht, C. Ré, and J. A. Tropp. Factoring nonnegative matrices with linear programs. Technical Report. Available at arxiv.org/1206.1270, 2012.
[7] E. D. Dolan and J. J. Moré. Benchmarking optimization software with performance profiles. Mathematical Programming, Series A, 91:201-213, 2002.
[8] D. Donoho and V. Stodden. When does non-negative matrix factorization give a correct decomposition into parts? In Advances in Neural Information Processing Systems, 2003.
[9] E. Elhamifar, G. Sapiro, and R. Vidal. See all by looking at a few: Sparse modeling for finding representative objects. In Proceedings of CVPR, 2012.
[10] E. Elhamifar and R. Vidal. Sparse subspace clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[11] E. Esser, M. Möller, S. Osher, G. Sapiro, and J. Xin. A convex model for non-negative matrix factorization and dimensionality reduction on physical space. IEEE Transactions on Image Processing, 2012. To appear. Preprint available at arxiv.org/abs/1102.0844.
[12] R. Gaujoux and C. Seoighe. NMF: A flexible R package for nonnegative matrix factorization. BMC Bioinformatics, 11:367, 2010. doi:10.1186/1471-2105-11-367.
[13] N. Gillis. Robustness analysis of HOTTOPIXX, a linear programming model for factoring nonnegative matrices. arxiv.org/1211.6687, 2012.
[14] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, May 2010.
[15] M. Gu and S. C. Eisenstat. Efficient algorithms for computing a strong rank-revealing QR factorization. SIAM Journal on Scientific Computing, 17:848-869, 1996.
[16] T. Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International SIGIR Conference on Research and Development in Information Retrieval, 1999.
[17] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788-791, 1999.
[18] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems, 2001.
[19] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397, 2004.
[20] M. W. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106:697-702, 2009.
[21] F. Niu, B. Recht, C. Ré, and S. J. Wright. HOGWILD!: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, 2011.
[22] L. D. Shapiro. Join processing in database systems with large main memories. ACM Transactions on Database Systems, 11(3):239-264, 1986.
[23] P. Smaragdis. Non-negative matrix factorization for polyphonic music transcription. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 177-180, 2003.
[24] M. Soltanolkotabi and E. J. Candès. A geometric analysis of subspace clustering with outliers. Preprint available at arxiv.org/abs/1112.4258, 2011.
[25] L. B. Thomas. Problem 73-14, rank factorization of nonnegative matrices. SIAM Review, 16(3):393-394, 1974.
[26] K. C. Toh, M. Todd, and R. H. Tütüncü. SDPT3: A MATLAB software package for semidefinite-quadratic-linear programming. Available from http://www.math.nus.edu.sg/~mattohkc/sdpt3.html.
[27] S. A. Vavasis. On the complexity of nonnegative matrix factorization. SIAM Journal on Optimization, 20(3):1364-1377, 2009.
3,887 | 4,519 | Adaptive Stratified Sampling for Monte-Carlo
integration of Differentiable functions
Alexandra Carpentier
Statistical Laboratory, CMS
Wilberforce Road, Cambridge
CB3 0WB UK
[email protected]
Rémi Munos
INRIA Lille - Nord Europe
40, avenue Halley
59000 Villeneuve d'Ascq, France
[email protected]
Abstract
We consider the problem of adaptive stratified sampling for Monte Carlo integration of a differentiable function given a finite number of evaluations to the function. We construct a sampling scheme that samples more often in regions where
the function oscillates more, while allocating the samples such that they are well
spread on the domain (this notion shares similitude with low discrepancy). We
prove that the estimate returned by the algorithm is almost as accurate as
the estimate that an optimal oracle strategy (that would know the variations of the
function everywhere) would return, and provide a finite-sample analysis.
1 Introduction
In this paper we consider the problem of numerical integration of a differentiable function $f : [0,1]^d \to \mathbb{R}$ given a finite budget n of evaluations to the function that can be allocated sequentially.
A usual technique for reducing the mean squared error (w.r.t. the integral of f) of a Monte-Carlo estimate is the so-called stratified Monte Carlo sampling, which considers sampling into a set of strata, or regions of the domain, that form a partition, i.e. a stratification, of the domain (see [10][Subsection 5.5] or [6]). It is efficient (up to rounding issues) to stratify the domain, since when allocating to each stratum a number of samples proportional to its measure, the mean squared error of the resulting estimate is always smaller or equal to the one of the crude Monte-Carlo estimate (that samples uniformly the domain).
Since the considered functions are differentiable, if the domain is stratified in K hyper-cubic strata of same measure and if one assigns uniformly at random n/K samples per stratum, the mean squared error of the resulting stratified estimate is in $O(n^{-1} K^{-2/d})$. We deduce that if the stratification is built independently of the samples (before collecting the samples), and if n is known from the beginning (which is assumed here), the minimax-optimal choice for the stratification is to build n strata of same measure and minimal diameter, and to assign only one sample per stratum uniformly at random. We refer to this sampling technique as Uniform stratified Monte-Carlo. The resulting estimate has a mean squared error of order $O(n^{-(1+2/d)})$. The arguments that advocate for stratifying in strata of same measure and minimal diameter are closely linked to the reasons why quasi Monte-Carlo methods, or low discrepancy sampling schemes are efficient techniques for integrating smooth functions. See [9] for a survey on these techniques.
It is minimax-optimal to stratify the domain in n strata and sample one point per stratum, but it would also be interesting to adapt the stratification of the space with respect to the function f. For example, if the function has larger variations in a region of the domain, we would like to discretize the domain in smaller strata in this region, so that more samples are assigned to this region. Since f is initially unknown, it is not possible to design a good stratification before sampling. However an efficient algorithm should allocate the samples in order to estimate online the variations of the
function in each region of the domain while, at the same time, allocating more samples in regions
where f has larger local variations.
The papers [5, 7, 3] provide algorithms for solving a similar trade-off when the stratification is fixed: these algorithms allocate more samples to strata in which the function has larger variations. It is, however, clear that the larger the number of strata, the more difficult it is to allocate the samples almost optimally in the strata.
Contributions: We propose a new algorithm, Lipschitz Monte-Carlo Upper Confidence Bound (LMC-UCB), for tackling this problem. It is a two-layered algorithm. It first stratifies the domain in $K \le n$ strata, and then allocates uniformly to each stratum an initial small amount of samples in order to estimate roughly the variations of the function per stratum. Then our algorithm sub-stratifies each of the K strata according to the estimated local variations, so that there are in total approximately n sub-strata, and allocates one point per sub-stratum. In that way, our algorithm discretizes the domain into more refined strata in regions where the function has higher variations. It cumulates the advantages of quasi Monte-Carlo and adaptive strategies.
More precisely, our contributions are the following:
? We prove an asymptotic lower bound on the mean squared error of the estimate returned by an optimal oracle strategy that has access to the variations of the function f everywhere and would use the best stratification of the domain with hyper-cubes (possibly of heterogeneous sizes). This quantity, since this is a lower-bound on any oracle strategies, is smaller than the mean squared error of the estimate provided by Uniform stratified Monte-Carlo (which is the non-adaptive minimax-optimal strategy on the class of differentiable functions), and also smaller than crude Monte-Carlo.
? We introduce the algorithm LMC-UCB, that sub-stratifies the K strata in hyper-cubic sub-strata, and samples one point per sub-stratum. The number of sub-strata per stratum is linked to the variations of the function in the stratum. We prove that algorithm LMC-UCB is asymptotically as efficient as the optimal oracle strategy. We also provide finite-time results when f admits a Taylor expansion of order 2 in every point. By tuning the number of strata K wisely, it is possible to build an algorithm that is almost as efficient as the optimal oracle strategy.
The paper is organized as follows. Section 2 defines the notations used throughout the paper. Section 3 states the asymptotic lower bound on the mean squared error of the optimal oracle strategy. In this Section, we also provide an intuition on how the number of samples into each stratum should be linked to the variation of the function in the stratum in order for the mean squared error of the estimate to be small. Section 4 presents the algorithm LMC-UCB and the first Lemma on how many sub-strata are built in the initial strata. Section 5 finally states that the algorithm LMC-UCB is almost as efficient as the optimal oracle strategy. We finally conclude the paper. Due to the lack of space, we also provide experiments and proofs in the Supplementary Material (see also [2]).
2 Setting
We consider a function $f : [0,1]^d \to \mathbb{R}$. We want to estimate as accurately as possible its integral according to the Lebesgue measure, i.e. $\int_{[0,1]^d} f(x)\,dx$. In order to do that, we consider algorithms that stratify the domain in two layers of strata, one more refined than the other. The strata of the refined layer are referred to as sub-strata, and we sample in the sub-strata. We will compare the performances of the algorithms we construct, with the performances of the optimal oracle algorithm that has access to the variations $\|\nabla f(x)\|_2$ of the function f everywhere in the domain, and is allowed to sample the domain where it wishes.
The first step is to partition the domain $[0,1]^d$ in K measurable strata. In this paper, we assume that $K^{1/d}$ is an integer¹. This enables us to partition, in a natural way, the domain in K hyper-cubic strata $(\Omega_k)_{k \le K}$ of same measure $w_k = \frac{1}{K}$. Each of these strata is a region of the domain $[0,1]^d$, and the K strata form a partition of the domain. We write $\mu_k = \frac{1}{w_k}\int_{\Omega_k} f(x)\,dx$ the mean and $\sigma_k^2 = \frac{1}{w_k}\int_{\Omega_k}\big(f(x)-\mu_k\big)^2\,dx$ the variance of a sample of the function f when sampling f at a point chosen at random according to the Lebesgue measure conditioned to stratum $\Omega_k$.
¹This is not restrictive in small dimension, but it may become more constraining for large d.
We possess a budget of n samples (which is assumed to be known in advance), which means that we can sample n times the function at any point of $[0,1]^d$. We denote by $\mathcal{A}$ an algorithm that sequentially allocates the budget by sampling at round t in the stratum indexed by $k_t \in \{1,\ldots,K\}$, and returns after all n samples have been used an estimate $\hat\mu_n$ of the integral of the function f.
We consider strategies that sub-partition each stratum $\Omega_k$ in hyper-cubes of same measure in $\Omega_k$, but of heterogeneous measure among the $\Omega_k$. In this way, the number of sub-strata in each stratum $\Omega_k$ can adapt to the variations of f within $\Omega_k$. The algorithms that we consider return a sub-partition of each stratum $\Omega_k$ in $S_k$ sub-strata. We call $N_k = (\Omega_{k,i})_{i \le S_k}$ the sub-partition of stratum $\Omega_k$. In each of these sub-strata, the algorithm allocates at least one point². We write $X_{k,i}$ the first point sampled uniformly at random in sub-stratum $\Omega_{k,i}$. We write $w_{k,i}$ the measure of the sub-stratum $\Omega_{k,i}$. Let us write $\mu_{k,i} = \frac{1}{w_{k,i}}\int_{\Omega_{k,i}} f(x)\,dx$ the mean and $\sigma_{k,i}^2 = \frac{1}{w_{k,i}}\int_{\Omega_{k,i}}\big(f(x)-\mu_{k,i}\big)^2\,dx$ the variance of a sample of f in sub-stratum $\Omega_{k,i}$ (e.g. of $X_{k,i} = f(U_{k,i})$ where $U_{k,i} \sim \mathcal{U}_{\Omega_{k,i}}$).
This class of 2-layered sampling strategies is rather large. In fact it contains strategies that are similar to low discrepancy strategies, and also to any stratified Monte-Carlo strategy. For example, consider that all K strata are hyper-cubes of same measure $\frac{1}{K}$ and that each stratum $\Omega_k$ is partitioned into $S_k$ hyper-rectangles $\Omega_{k,i}$ of minimal diameter and same measure $\frac{1}{K S_k}$. If the algorithm allocates one point per sub-stratum, its sampling scheme shares similarities with quasi Monte-Carlo sampling schemes, since the points at which the function is sampled are well spread.
Let us now consider an algorithm that first chooses the sub-partition $(N_k)_k$ and then allocates deterministically 1 sample uniformly at random in each sub-stratum $\Omega_{k,i}$. We consider the stratified estimate $\hat\mu_n = \sum_{k=1}^{K}\sum_{i=1}^{S_k}\frac{w_k}{S_k}X_{k,i}$ of $\mu$ (recall that within stratum $\Omega_k$ the sub-strata have equal measure, $w_{k,i} = \frac{w_k}{S_k}$). We have
$$\mathbb{E}(\hat\mu_n)=\sum_{k=1}^{K}\sum_{i=1}^{S_k}\frac{w_k}{S_k}\,\mu_{k,i}=\sum_{k=1}^{K}\sum_{i=1}^{S_k}\int_{\Omega_{k,i}}f(x)\,dx=\int_{[0,1]^d}f(x)\,dx=\mu,$$
and also
$$\mathbb{V}(\hat\mu_n)=\sum_{k\le K}\sum_{i=1}^{S_k}\Big(\frac{w_k}{S_k}\Big)^{2}\,\mathbb{E}\big(X_{k,i}-\mu_{k,i}\big)^{2}=\sum_{k\le K}\sum_{i=1}^{S_k}\frac{w_k^2}{S_k^2}\,\sigma_{k,i}^2.$$
For a given algorithm $\mathcal{A}$ that builds for each stratum k a sub-partition $N_k=(\Omega_{k,i})_{i\le S_k}$, we call pseudo-risk the quantity
$$L_n(\mathcal{A})=\sum_{k\le K}\sum_{i=1}^{S_k}\frac{w_k^2}{S_k^2}\,\sigma_{k,i}^2.\qquad(1)$$
Some further insight on this quantity is provided in the paper [4].
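Read as code, Eq. (1) is just a weighted sum of sub-stratum variances. The helper below is a hypothetical illustration in our own notation, assuming the variances $\sigma_{k,i}^2$ are known:

```python
import numpy as np

def pseudo_risk(w, S, sigma2):
    # w[k]: measure of stratum k; S[k]: number of sub-strata of stratum k;
    # sigma2[k]: array holding the S[k] sub-stratum variances sigma_{k,i}^2.
    # Eq. (1): L_n(A) = sum_k sum_i (w_k / S_k)^2 * sigma_{k,i}^2
    return sum((w[k] / S[k]) ** 2 * np.sum(sigma2[k]) for k in range(len(w)))
```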
Consider now the uniform strategy, i.e. a strategy that divides the domain in K = n hyper-cubic strata. This strategy is a fairly natural, minimax-optimal static strategy, on the class of differentiable functions defined on $[0,1]^d$, when no information on f is available. We will prove in the next Section that its asymptotic mean squared error is equal to
$$\frac{1}{12}\Big(\int_{[0,1]^d}\|\nabla f(x)\|_2^2\,dx\Big)\frac{1}{n^{1+\frac{2}{d}}}.$$
This quantity is of order $n^{-1-2/d}$, which is smaller, as expected, than 1/n: this strategy is more efficient than crude Monte-Carlo.
We will also prove in the next Section that the minimum asymptotic mean squared error of an optimal oracle strategy (we call it "oracle" because it builds the stratification using the information about the variations $\|\nabla f(x)\|_2$ of f in every point x), is larger than
$$\frac{1}{12}\Big(\int_{[0,1]^d}\big(\|\nabla f(x)\|_2\big)^{\frac{d}{d+1}}\,dx\Big)^{\frac{2(d+1)}{d}}\frac{1}{n^{1+\frac{2}{d}}}.$$
This quantity is always smaller than the asymptotic mean squared error of the Uniform stratified Monte-Carlo strategy, which makes sense since this strategy assumes the knowledge of the variations of f everywhere, and can thus adapt accordingly the number of samples in each region. We define
$$\Sigma=\frac{1}{12}\Big(\int_{[0,1]^d}\big(\|\nabla f(x)\|_2\big)^{\frac{d}{d+1}}\,dx\Big)^{\frac{2(d+1)}{d}}.\qquad(2)$$
²This implies that $\sum_k S_k \le n$.
Given this minimum asymptotic mean squared error of an optimal oracle strategy, we define the pseudo-regret of an algorithm $\mathcal{A}$ as
$$R_n(\mathcal{A})=L_n(\mathcal{A})-\Sigma\,\frac{1}{n^{1+\frac{2}{d}}}.\qquad(3)$$
n d
This pseudo-regret is the difference between the pseudo-risk of the estimate provided by algorithm
A, and the lower-bound on the optimal oracle mean squared error. In other words, this pseudo-regret
is the price an adaptive strategy pays for not knowing in advance the function f , and thus not having
access to its variations. An ef?cient adaptive strategy should aim at minimizing this gap coming
from the lack of informations.
3 Discussion on the optimal asymptotic mean squared error
3.1 Asymptotic lower bound on the mean squared error, and comparison with the Uniform stratified Monte-Carlo
A first part of the analysis of the exposed problem consists in finding a good point of comparison for the pseudo-risk. The following Lemma states an asymptotic lower bound on the mean squared error of the optimal oracle sampling strategy.
Lemma 1 Assume that f is such that $\nabla f$ is continuous and $\int \|\nabla f(x)\|_2^2\,dx < \infty$. Let $(\Omega_k^n)_{k \le n}$ be an arbitrary sequence of partitions of $[0,1]^d$ in n strata such that all the strata are hyper-cubes, and such that the maximum diameter of each stratum goes to 0 as $n \to +\infty$ (but the strata are allowed to have heterogeneous measures). Let $\hat\mu_n$ be the stratified estimate of the function for the partition $(\Omega_k^n)_{k \le n}$ when there is one point pulled at random per stratum. Then
$$\liminf_{n\to\infty}\, n^{1+2/d}\,\mathbb{V}(\hat\mu_n)\ \ge\ \Sigma.$$
The full proof of this Lemma is in the Supplementary Material, Appendix B (see also [2]).
We have also the following equality for the asymptotic mean squared error of the uniform strategy.
Lemma 2 Assume that f is such that $\nabla f$ is continuous and $\int \|\nabla f(x)\|_2^2\,dx < \infty$. For any $n = l^d$ such that l is an integer (and thus such that it is possible to partition the domain in n hyper-cubic strata of same measure), define $(\Omega_k^n)_{k \le n}$ as the sequence of partitions in hyper-cubic strata of same measure 1/n. Let $\hat\mu_n$ be the stratified estimate of the function for the partition $(\Omega_k^n)_{k \le n}$ when there is one point pulled at random per stratum. Then
$$\liminf_{n\to\infty}\, n^{1+2/d}\,\mathbb{V}(\hat\mu_n)=\frac{1}{12}\int_{[0,1]^d}\|\nabla f(x)\|_2^2\,dx.$$
The proof of this Lemma is substantially similar to the proof of Lemma 1 in the Supplementary Material, Appendix B (see also [2]). The only difference is that the measure of each stratum $\Omega_k^n$ is 1/n and that in Step 2, instead of Fatou's Lemma, the Theorem of dominated convergence is required.
The optimal rate for the mean squared error, which is also the rate of the Uniform stratified Monte-Carlo in Lemma 2, is $n^{-1-2/d}$ and is attained with ideas of low discrepancy sampling. The constant can however be improved (with respect to the constant in Lemma 2), by adapting to the specific shape of each function. In Lemma 1, we exhibit a lower bound for this constant (and without surprises, $\frac{1}{12}\int_{[0,1]^d}\|\nabla f(x)\|_2^2\,dx \ge \Sigma$). Our aim is to build an adaptive sampling scheme, also sharing ideas with low discrepancy sampling, that attains this lower-bound.
There is one main restriction in both Lemmas: we impose that the sequence of partitions $(\Omega_k^n)_{k \le n}$ is composed only with strata that have the shape of an hyper-cube. This assumption is in fact reasonable: indeed, if the shape of the strata could be arbitrary, one could take the level sets (or approximate level sets as the number of strata is limited by n) as strata, and this would lead to $\liminf_{n\to\infty} n^{1+2/d}\,\mathbb{V}(\hat\mu_n) = 0$. But this is not a fair competition, as the function is unknown, and determining these level sets is actually a much harder problem than integrating the function. The fact that the strata are hyper-cubes appears, in fact, in the bound. If we had chosen other shapes, e.g. $\ell_2$ balls, the constant $\frac{1}{12}$ in front of the bounds in both Lemmas would change³. It is however not possible to make a finite partition in $\ell_2$ balls of $[0,1]^d$, and we chose hyper-cubes since it is quite easy to stratify $[0,1]^d$ in hyper-cubic strata.
³The $\frac{1}{12}$ comes from computing the variance of a uniform random variable on [0, 1].
The proof of Lemma 1 makes the quantity
$$s^\star(x)=\frac{\big(\|\nabla f(x)\|_2\big)^{\frac{d}{d+1}}}{\int_{[0,1]^d}\big(\|\nabla f(u)\|_2\big)^{\frac{d}{d+1}}\,du}$$
appear. This quantity is proposed as "asymptotic optimal allocation", i.e. the asymptotically optimal number of sub-strata one would ideally create in any small sub-stratum centered in x. This is however not very useful for building an algorithm. The next Subsection provides an intuition on this matter.
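For intuition, $s^\star(x)$ can be approximated on a uniform grid of $[0,1]^d$; the sketch below (our own code; `grad_norms` is assumed to hold $\|\nabla f(x)\|_2$ at the grid points) normalizes so that the density averages to 1, i.e. approximately integrates to 1 over the domain:

```python
import numpy as np

def optimal_allocation_density(grad_norms, d):
    # s*(x) is proportional to ||grad f(x)||_2 ** (d / (d+1)).
    g = grad_norms ** (d / (d + 1.0))
    return g / g.mean()   # grid mean approximates the normalizing integral
```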
3.2 An intuition of a good allocation: Piecewise linear functions
In this Subsection, we (i) provide an example where the asymptotic optimal mean squared error is also the optimal mean squared error at finite distance and (ii) provide explicitly what is, in that case, a good allocation. We do that in order to give an intuition for the algorithm that we introduce in the next Section.
We consider a partition in K hyper-cubic strata $\Omega_k$. Let us assume that the function f is affine on all strata $\Omega_k$, i.e. on stratum $\Omega_k$, we have $f(x)=\big(\langle\theta_k,x\rangle+\rho_k\big)\mathbb{I}\{x\in\Omega_k\}$. In that case $\mu_k = f(a_k)$ where $a_k$ is the center of the stratum $\Omega_k$. We then have:
$$\sigma_k^2=\frac{1}{w_k}\int_{\Omega_k}\big(f(x)-f(a_k)\big)^2\,dx=\frac{1}{w_k}\int_{\Omega_k}\langle\theta_k,x-a_k\rangle^2\,dx=\frac{1}{w_k}\,\frac{\|\theta_k\|_2^2}{12}\,w_k^{1+2/d}=\frac{\|\theta_k\|_2^2}{12}\,w_k^{2/d}.$$
We consider also a sub-partition of $\Omega_k$ in $S_k$ hyper-cubes of same size (we assume that $S_k^{1/d}$ is an integer), and we assume that in each sub-stratum $\Omega_{k,i}$, we sample one point. We also have $\sigma_{k,i}^2=\frac{\|\theta_k\|_2^2}{12}\big(\frac{w_k}{S_k}\big)^{2/d}$ for sub-stratum $\Omega_{k,i}$.
For a given k and a given $S_k$, all the $\sigma_{k,i}$ are equal. The pseudo-risk of an algorithm $\mathcal{A}$ that divides each stratum $\Omega_k$ in $S_k$ sub-strata is thus
$$L_n(\mathcal{A})=\sum_{k\le K}\sum_{i\le S_k}\frac{w_k^2}{S_k^2}\,\frac{\|\theta_k\|_2^2}{12}\Big(\frac{w_k}{S_k}\Big)^{2/d}=\sum_{k\le K}\frac{w_k^{2+2/d}\,\|\theta_k\|_2^2}{12\,S_k^{1+2/d}}=\sum_{k\le K}\frac{w_k^2}{S_k^{1+2/d}}\,\sigma_k^2.$$
If an unadaptive algorithm $\mathcal{A}^\star$ has access to the variances $\sigma_k^2$ in the strata, it can choose to allocate the budget in order to minimize the pseudo-risk. After solving the simple optimization problem of minimizing $L_n(\mathcal{A})$ with respect to $(S_k)_k$, we deduce that an optimal oracle strategy on this stratification would divide each stratum k in
$$S_k^\star=\frac{(w_k\sigma_k)^{\frac{d}{d+1}}}{\sum_{i\le K}(w_i\sigma_i)^{\frac{d}{d+1}}}\,n$$
sub-strata⁴. The pseudo-risk for this strategy is then
$$L_{n,K}(\mathcal{A}^\star)=\frac{\Big(\sum_{k\le K}(w_k\sigma_k)^{\frac{d}{d+1}}\Big)^{\frac{2(d+1)}{d}}}{n^{1+2/d}}=\frac{\Sigma_K}{n^{1+2/d}},\qquad(4)$$
where we write $\Sigma_K=\Big(\sum_{i\le K}(w_i\sigma_i)^{\frac{d}{d+1}}\Big)^{\frac{2(d+1)}{d}}$. We will call in the paper optimal proportions the quantities
$$\lambda_{K,k}=\frac{(w_k\sigma_k)^{\frac{d}{d+1}}}{\sum_{i\le K}(w_i\sigma_i)^{\frac{d}{d+1}}}.\qquad(5)$$
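The oracle allocation and the constant of Eq. (4) are straightforward to compute once the pairs $(w_k, \sigma_k)$ are known. The helper below is our own illustration, ignoring rounding as in footnote 4:

```python
import numpy as np

def oracle_allocation(w, sigma, n, d):
    # w, sigma: arrays of stratum measures and standard deviations.
    a = (w * sigma) ** (d / (d + 1.0))
    lam = a / a.sum()                           # optimal proportions, Eq. (5)
    S_star = lam * n                            # ideal sub-stratum counts
    Sigma_K = a.sum() ** (2.0 * (d + 1) / d)    # constant of Eq. (4)
    return lam, S_star, Sigma_K / n ** (1.0 + 2.0 / d)
```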
In the specific case of functions that are piecewise linear, we have
$$\sum_{k\le K}(w_k\sigma_k)^{\frac{d}{d+1}}=\sum_{k\le K}\Big(w_k\,\frac{\|\theta_k\|_2}{\sqrt{12}}\,w_k^{1/d}\Big)^{\frac{d}{d+1}}=\frac{1}{12^{\frac{d}{2(d+1)}}}\int_{[0,1]^d}\big(\|\nabla f(x)\|_2\big)^{\frac{d}{d+1}}\,dx.$$
We thus have
$$L_{n,K}(\mathcal{A}^\star)=\Sigma\,\frac{1}{n^{1+\frac{2}{d}}}.\qquad(6)$$
This optimal oracle strategy attains the lower bound in Lemma 1. We will thus construct, in the next Section, an algorithm that learns and adapts to the optimal proportions defined in Equation 5.
⁴We deliberately forget about rounding issues in this Subsection. The allocation we provide might not be realizable (e.g. if $S_k^\star$ is not an integer), but plugging it in the bound provides a lower bound on any realizable performance.
4 The Algorithm LMC-UCB
4.1 Algorithm LMC-UCB
We present the algorithm Lipschitz Monte Carlo Upper Confidence Bound (LMC-UCB). It takes as parameter a partition $(\Omega_k)_{k\le K}$ in $K \le n$ hyper-cubic strata of same measure 1/K (it is possible since we assume that $\exists l \in \mathbb{N}$ such that $l^d = K$). It also takes as parameter a uniform upper bound L on $\|\nabla f(x)\|_2^2$, and $\delta$, a (small) probability. The aim of algorithm LMC-UCB is to sub-stratify each stratum $\Omega_k$ in $\lambda_{K,k}\,n = \frac{(w_k\sigma_k)^{\frac{d}{d+1}}}{\sum_{i=1}^{K}(w_i\sigma_i)^{\frac{d}{d+1}}}\,n$ hyper-cubic sub-strata of same measure and sample one point per sub-stratum. An intuition on why this target is relevant was provided in Section 3.
Algorithm LMC-UCB starts by sub-stratifying each stratum $\Omega_k$ in $\bar S = \Big(\Big\lfloor\big(\frac{n^{\frac{d}{d+1}}}{K}\big)^{1/d}\Big\rfloor\Big)^d$ hyper-cubic strata of same measure. It is possible to do that since by definition, $\bar S^{1/d}$ is an integer. We write this first sub-stratification $\bar N_k = (\bar\Omega_{k,i})_{i\le\bar S}$. It then pulls one sample per sub-stratum in $\bar N_k$ for each $\Omega_k$.
It then sub-stratifies again each stratum $\Omega_k$ using the information collected. It sub-stratifies each stratum $\Omega_k$ in
$$S_k=\max\Bigg(\Bigg\lfloor\Bigg(\frac{w_k^{\frac{d}{d+1}}\Big(\hat\sigma_{k,K\bar S}+A\big(\tfrac{w_k}{\bar S}\big)^{1/d}\tfrac{1}{\sqrt{\bar S}}\Big)^{\frac{d}{d+1}}}{\sum_{i=1}^{K}w_i^{\frac{d}{d+1}}\Big(\hat\sigma_{i,K\bar S}+A\big(\tfrac{w_i}{\bar S}\big)^{1/d}\tfrac{1}{\sqrt{\bar S}}\Big)^{\frac{d}{d+1}}}\,\big(n-K\bar S\big)\Bigg)^{1/d}\Bigg\rfloor^{d},\ \bar S\Bigg)\qquad(7)$$
hyper-cubic strata of same measure (see Figure 1 for a definition of A). It is possible to do that because by definition, $S_k^{1/d}$ is an integer. We call this sub-stratification of stratum $\Omega_k$ stratification $N_k = (\Omega_{k,i})_{i\le S_k}$. In the last Equation, we compute the empirical standard deviation in stratum $\Omega_k$ at time $K\bar S$ as
$$\hat\sigma_{k,K\bar S}=\sqrt{\frac{1}{\bar S-1}\sum_{i=1}^{\bar S}\Big(X_{k,i}-\frac{1}{\bar S}\sum_{j=1}^{\bar S}X_{k,j}\Big)^{2}}.\qquad(8)$$
Algorithm LMC-UCB then samples in each sub-stratum $\Omega_{k,i}$ one point. It is possible to do that since, by definition of $S_k$, $\sum_k S_k + K\bar S \le n$.
The algorithm outputs an estimate $\hat\mu_n$ of the integral of f, computed with the first point in each sub-stratum of partition $N_k$. We present in Figure 1 the pseudo-code of algorithm LMC-UCB.
Input: Partition $(\Omega_k)_{k\le K}$, L, $\delta$; set $A = 2L\sqrt{d\log(2K/\delta)}$
Initialize: $\forall k \le K$, sample 1 point in each stratum of partition $\bar N_k$
Main algorithm:
  Compute $S_k$ for each $k \le K$
  Create partition $N_k$ for each $k \le K$
  Sample a point in $\Omega_{k,i} \in N_k$ for $i \le S_k$
Output: Return the estimate $\hat\mu_n$ computed when taking the first point $X_{k,i}$ in each sub-stratum $\Omega_{k,i}$ of $N_k$, that is to say $\hat\mu_n = \sum_{k=1}^{K} w_k \sum_{i=1}^{S_k} \frac{X_{k,i}}{S_k}$
Figure 1: Pseudo-code of LMC-UCB. The definitions of $\bar N_k$, $\bar S$, $\Omega_{k,i}$ and $S_k$ are in the main text.
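For concreteness, below is a compact Python rendering of Figure 1. This is our own simplified sketch, not the authors' code: it assumes K is a perfect d-th power, n is large enough that $\bar S \ge 2$, f accepts an array of points, and rounding is handled by flooring to d-th powers as in the text.

```python
import numpy as np

def cube_points(l, d, rng):
    # One uniform point in each of the l**d sub-cubes of [0,1)^d of side 1/l.
    g = np.stack(np.meshgrid(*[np.arange(l)] * d, indexing="ij"),
                 axis=-1).reshape(-1, d)
    return (g + rng.random((l ** d, d))) / l

def lmc_ucb(f, n, K, L, delta, d, rng):
    A = 2 * L * np.sqrt(d * np.log(2 * K / delta))   # as in Figure 1
    kl = round(K ** (1.0 / d))                       # assumes K = kl**d
    w = 1.0 / K                                      # common stratum measure
    m = int((n ** (d / (d + 1.0)) / K) ** (1.0 / d))
    assert m >= 2, "need n large enough relative to K"
    Sbar = m ** d                                    # initial sub-strata per stratum
    corners = np.stack(np.meshgrid(*[np.arange(kl)] * d, indexing="ij"),
                       axis=-1).reshape(-1, d) / kl  # stratum lower corners
    # Phase 1: Sbar well-spread samples per stratum -> empirical std, Eq. (8).
    sig = np.array([f(corners[k] + cube_points(m, d, rng) / kl).std(ddof=1)
                    for k in range(K)])
    # Phase 2: number of sub-strata per stratum, Eq. (7) (simplified rounding).
    b = w ** (d / (d + 1.0)) * (
        sig + A * (w / Sbar) ** (1.0 / d) / np.sqrt(Sbar)) ** (d / (d + 1.0))
    S = np.maximum(b / b.sum() * (n - K * Sbar), Sbar)
    side = np.floor(S ** (1.0 / d)).astype(int)      # makes S_k**(1/d) an integer
    # One point per sub-stratum; stratified estimate as in Figure 1's output.
    return sum(w * f(corners[k] + cube_points(side[k], d, rng) / kl).mean()
               for k in range(K))
```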
4.2 High probability lower bound on the number of sub-strata of stratum $\Omega_k$
We first state an assumption on the function f.
Assumption 1 The function f is such that $\nabla f$ exists and $\forall x \in [0,1]^d$, $\|\nabla f(x)\|_2^2 \le L$.
The next Lemma states that with high probability, the number $S_k$ of sub-strata of stratum $\Omega_k$, in which there is at least one point, adjusts "almost" to the unknown optimal proportions.
Lemma 3 Let Assumption 1 be satisfied and $(\Omega_k)_{k\le K}$ be a partition in K hyper-cubic strata of same measure. If $n \ge 4K$, then with probability at least $1-\delta$, $\forall k$, the number of sub-strata satisfies
$$S_k\ \ge\ \max\bigg(\lambda_{K,k}\,n-7(L+1)\,d^{3/2}\sqrt{\log(K/\delta)}\Big(1+\frac{1}{\Sigma_K}\Big)K^{\frac{1}{d+1}}\,n^{\frac{d}{d+1}},\ \bar S\bigg).$$
The proof of this result is in the Supplementary Material (Appendix C) (see also [2]).
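Read as code (a hypothetical helper of ours), the guarantee bounds how far the realized $S_k$ can fall below its target $\lambda_{K,k}\,n$:

```python
import numpy as np

def lemma3_lower_bound(lam_k, n, K, L, d, delta, Sigma_K, Sbar):
    # High-probability slack term of Lemma 3 for one stratum.
    slack = (7 * (L + 1) * d ** 1.5 * np.sqrt(np.log(K / delta))
             * (1 + 1 / Sigma_K) * K ** (1 / (d + 1)) * n ** (d / (d + 1)))
    return max(lam_k * n - slack, Sbar)
```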
4.3 Remarks
A sampling scheme that shares ideas with quasi Monte-Carlo methods: Algorithm LMC-UCB almost manages to divide each stratum $\Omega_k$ in $\lambda_{K,k}\,n$ hyper-cubic strata of same measure, each one of them containing at least one sample. It is thus possible to build a learning procedure that, at the same time, estimates the empirical proportions $\lambda_{K,k}$, and allocates the samples proportionally to them.
The error terms: There are two reasons why we are not able to divide exactly each stratum $\Omega_k$ in $\lambda_{K,k}\,n$ hyper-cubic strata of same measure. The first reason is that the true proportions $\lambda_{K,k}$ are unknown, and that it is thus necessary to estimate them. The second reason is that we want to build strata that are hyper-cubes of same measure. The number of strata $S_k$ needs thus to be such that $S_k^{1/d}$ is an integer. We thus also lose efficiency because of rounding issues.
5 Main results
5.1 Asymptotic convergence of algorithm LMC-UCB
By just combining the result of Lemma 1 with the result of Lemma 3, it is possible to show that algorithm LMC-UCB is asymptotically (when K goes to $+\infty$ and $n \ge K$) as efficient as the optimal oracle strategy of Lemma 1.
Theorem 1 Assume that $\nabla f$ is continuous, and that Assumption 1 is satisfied. Let $(\Omega_k^n)_{n,k\le K_n}$ be an arbitrary sequence of partitions such that all the strata are hyper-cubes, such that $4K_n \le n$, such that the diameter of each strata goes to 0, and such that $\lim_{n\to+\infty}\frac{1}{n}K_n\big(\log(K_n n^2)\big)^{\frac{d+1}{2}} = 0$. The regret of LMC-UCB with parameter $\delta_n = \frac{1}{n^2}$ on this sequence of partitions, where for sequence $(\Omega_k^n)_{n,k\le K_n}$ it disposes of n points, is such that
$$\lim_{n\to\infty} n^{1+2/d}\,R_n(\mathcal{A}_{LMC\text{-}UCB})=0.$$
The proof of this result is in the Supplementary Material (Appendix D) (see also [2]).
5.2 Under a slightly stronger Assumption
We introduce the following Assumption, that is to say that f admits a Taylor expansion of order 2.
Assumption 2 f admits a Taylor expansion at the second order in any point $a \in [0,1]^d$ and this expansion is such that $\forall x$, $|f(x)-f(a)-\langle\nabla f(a),(x-a)\rangle| \le M\|x-a\|_2^2$ where M is a constant.
This is a slightly stronger assumption than Assumption 1, since it imposes, additional to Assumption 1, that the variations of $\nabla f(x)$ are uniformly bounded for any $x \in [0,1]^d$. Assumption 2 implies Assumption 1 since $\big|\|\nabla f(x)\|_2-\|\nabla f(0)\|_2\big| \le M\|x-0\|_2$, which implies that $\|\nabla f(x)\|_2 \le \|\nabla f(0)\|_2 + M\sqrt{d}$. This implies in particular that we can consider $L = \big(\|\nabla f(0)\|_2 + M\sqrt{d}\big)^2$. We however do not need M to tune the algorithm LMC-UCB, as long as we have access to L (although M appears in the bound of next Theorem).
We can now prove a bound on the pseudo-regret.
Theorem 2 Under Assumptions 1 and 2, if $n \ge 4K$, the estimate returned by algorithm LMC-UCB is such that, with probability $1-\delta$, we have
$$R_n(\mathcal{A}_{LMC\text{-}UCB})\ \le\ \frac{1}{n^{\frac{d+2}{d}}}\,M(L+1)^4\Big(1+\frac{3M\sqrt{d}}{\delta^4}\Big)\bigg(650\,d^{3/2}\sqrt{\log(K/\delta)}\,K^{\frac{1}{d+1}}\,n^{-\frac{1}{d+1}}+25d\,\frac{1}{\sqrt{K}}\bigg).$$
A proof of this result is in the Supplementary Material (Appendix E) (see also [2]).
Now we can choose optimally the number of strata so that we minimize the regret.
Theorem 3 Under Assumptions 1 and 2, the algorithm LMC-UCB launched on $K_n = \big(\lfloor(\sqrt{n})^{1/d}\rfloor\big)^d$ hyper-cubic strata is such that, with probability $1-\delta$, we have
$$R_n(\mathcal{A}_{LMC\text{-}UCB})\ \le\ \frac{1}{n^{1+\frac{2}{d}+\frac{1}{2(d+1)}}}\,700\,M(L+1)^4\,d^{3/2}\Big(1+\frac{3M\sqrt{d}}{\delta^4}\Big)\sqrt{\log(n/\delta)}.$$
5.3 Discussion
Convergence of the algorithm LMC-UCB to the optimal oracle strategy: When the number of strata $K_n$ grows to infinity, but such that $\lim_{n\to+\infty}\frac{1}{n}K_n\big(\log(K_n n^2)\big)^{\frac{d+1}{2}} = 0$, the pseudo-regret of algorithm LMC-UCB converges to 0. It means that this strategy is asymptotically as efficient as (the lower bound on) the optimal oracle strategy. When f admits a Taylor expansion at the first order in every point, it is also possible to obtain a finite-time bound on the pseudo-regret.
A new sampling scheme: The algorithm LMC-UCB samples the points in a way that takes advantage of both stratified sampling and quasi Monte-Carlo. Indeed, LMC-UCB is designed to cumulate (i) the advantages of quasi Monte-Carlo by spreading the samples in the domain and (ii) the advantages of stratified, adaptive sampling by allocating more samples where the function has larger variations. For these reasons, this technique is efficient on differentiable functions. We illustrate this assertion by numerical experiments in the Supplementary Material (Appendix A) (see also [2]).
In high dimension: The bound on the pseudo-regret in Theorem 3 is of order $n^{-1-\frac{2}{d}}\,\mathrm{poly}(d)\,n^{-\frac{1}{2(d+1)}}$. In order for the pseudo-regret to be negligible when compared to the optimal oracle mean squared error of the estimate (which is of order $n^{-1-\frac{2}{d}}$) it is necessary that $\mathrm{poly}(d)\,n^{-\frac{1}{2(d+1)}}$ is negligible compared to 1. In particular, this says that n should scale exponentially with the dimension d. This is unavoidable, since stratified sampling shrinks the approximation error to the asymptotic oracle only if the diameter of each stratum is small, i.e. if the space is stratified in every direction (and thus if n is exponential with d). However Uniform stratified Monte-Carlo, also for the same reasons, shares this problem⁵.
We emphasize however the fact that a (slightly modified) version of our algorithm is more efficient than crude Monte-Carlo, up to a negligible term that depends only of $\mathrm{poly}(\log(d))$. The bound in Lemma 3 depends of $\mathrm{poly}(d)$ only because of rounding issues, coming from the fact that we aim at dividing each stratum $\Omega_k$ in hyper-cubic sub-strata. The whole budget is thus not completely used, and only $\sum_k S_k + K\bar S$ samples are collected. By modifying LMC-UCB so that it allocates the remaining budget uniformly at random on the domain, it is possible to prove that the (modified) algorithm is always at least as efficient as crude Monte-Carlo.
Conclusion
This work provides an adaptive method for estimating the integral of a differentiable function f. We first proposed a benchmark for measuring efficiency: we proved that the asymptotic mean squared error of the estimate outputted by the optimal oracle strategy is lower bounded by $\Sigma\,\frac{1}{n^{1+2/d}}$. We then proposed an algorithm called LMC-UCB, which manages to learn the amplitude of the variations of f, to sample more points where these variations are larger, and to spread these points in a way that is related to quasi Monte-Carlo sampling schemes. We proved that algorithm LMC-UCB is asymptotically as efficient as the optimal, oracle strategy. Under the assumption that f admits a Taylor expansion in each point, we provide also a finite time bound for the pseudo-regret of algorithm LMC-UCB. We summarize in Table 1 the rates and finite-time bounds for crude Monte-Carlo, Uniform stratified Monte-Carlo and LMC-UCB. An interesting extension of this work would be to
Pseudo-risk of each sampling scheme (rate times asymptotic constant, plus finite-time bound):
Crude MC:                $\frac{1}{n}\int_{[0,1]^d}\Big(f(x)-\int_{[0,1]^d}f(u)\,du\Big)^2dx\;+\;0$
Uniform stratified MC:   $\frac{1}{n^{1+\frac{2}{d}}}\cdot\frac{1}{12}\int_{[0,1]^d}\|\nabla f(x)\|_2^2\,dx\;+\;O\Big(\frac{1}{n^{1+\frac{2}{d}+\frac{1}{2d}}}\Big)$
LMC-UCB:                 $\frac{1}{n^{1+\frac{2}{d}}}\cdot\frac{1}{12}\Big(\int_{[0,1]^d}\big(\|\nabla f(x)\|_2\big)^{\frac{d}{d+1}}dx\Big)^{\frac{2(d+1)}{d}}\;+\;O\Big(\frac{1}{n^{1+\frac{2}{d}+\frac{1}{2(d+1)}}}\Big)$
Table 1: Rate of convergence plus finite-time bounds for Crude Monte-Carlo, Uniform stratified Monte Carlo (see Lemma 2) and LMC-UCB (see Theorems 1 and 3).
adapt it to $\alpha$-Hölder functions that admit a Riemann-Liouville derivative of order $\alpha$. We believe that similar results could be obtained, with an optimal constant and a rate of order $n^{1+2\alpha/d}$.
Acknowledgements This research was partially supported by Nord-Pas-de-Calais Regional Council, French ANR EXPLO-RA (ANR-08-COSI-004), the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 270327 (project CompLACS), and by Pascal-2.
⁵When d is very large and n is not exponential in d, then second order terms, depending on the dimension, take over the bound in Lemma 2 (which is an asymptotic bound) and poly(d) appears in these negligible terms.
References
[1] J.Y. Audibert, R. Munos, and Cs. Szepesvári. Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876-1902, 2009.
[2] A. Carpentier and R. Munos. Adaptive Stratified Sampling for Monte-Carlo integration of Differentiable functions. Technical report, arXiv:0575985, 2012.
[3] A. Carpentier and R. Munos. Finite-time analysis of stratified sampling for Monte Carlo. In Neural Information Processing Systems (NIPS), 2011a.
[4] A. Carpentier and R. Munos. Finite-time analysis of stratified sampling for Monte Carlo. Technical report, INRIA-00636924, 2011b.
[5] Pierre Etoré and Benjamin Jourdain. Adaptive optimal allocation in stratified sampling methods. Methodol. Comput. Appl. Probab., 12(3):335-360, September 2010.
[6] P. Glasserman. Monte Carlo methods in financial engineering. Springer Verlag, 2004. ISBN 0387004513.
[7] V. Grover. Active learning and its application to heteroscedastic problems. Department of Computing Science, Univ. of Alberta, MSc thesis, 2009.
[8] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In Proceedings of the Twenty-Second Annual Conference on Learning Theory, pages 115-124, 2009.
[9] H. Niederreiter. Quasi-Monte Carlo methods and pseudo-random numbers. Bull. Amer. Math. Soc, 84(6):957-1041, 1978.
[10] R.Y. Rubinstein and D.P. Kroese. Simulation and the Monte Carlo method. Wiley-Interscience, 2008. ISBN 0470177942.
3,888 | 452 | Obstacle Avoidance through Reinforcement
Learning
Tony J. Prescott and John E. W. Mayhew
Artificial Intelligence and Vision Research Unit.
University of Sheffield, S10 2TN, England.
Abstract
A method is described for generating plan-like, reflexive, obstacle
avoidance behaviour in a mobile robot. The experiments reported here
use a simulated vehicle with a primitive range sensor. Avoidance
behaviour is encoded as a set of continuous functions of the perceptual
input space. These functions are stored using CMACs and trained by a
variant of Barto and Sutton's adaptive critic algorithm. As the vehicle
explores its surroundings it adapts its responses to sensory stimuli so
as to minimise the negative reinforcement arising from collisions.
Strategies for local navigation are therefore acquired in an explicitly
goal-driven fashion. The resulting trajectories form elegant collision-free paths through the environment.
1 INTRODUCTION
Following Simon's (1969) observation that complex behaviour may simply be the
reflection of a complex environment a number of researchers (e.g. Braitenberg 1986, Anderson and Donath 1988, Chapman and Agre 1987) have taken the view that
interesting, plan-like behaviour can emerge from the interplay of a set of pre-wired
reflexes with regularities in the world. However, the temporal structure in an agent's
interaction with its environment can act as more than just a trigger for fixed reactions.
Given a suitable learning mechanism it can also be exploited to generate sequences of new
responses more suited to the problem in hand. Hence, this paper attempts to show that
obstacle avoidance, a basic level of navigation competence, can be developed through
learning a set of conditioned responses to perceptual stimuli.
In the absence of a teacher a mobile robot can evaluate its performance only in terms of
final outcomes. A negative reinforcement signal can be generated each time a collision
occurs but this information tells the robot neither when nor how, in the train of actions
preceding the crash, a mistake was made. In reinforcement learning this credit assignment
problem is overcome by forming associations between sensory input patterns and
predictions of future outcomes. This allows the generation of internal "secondary
reinforcement" signals that can be used to select improved responses. Several authors
have discussed the use of reinforcement learning for navigation, this research is inspired
primarily by that of Barto, Sutton and co-workers (1981, 1982, 1983, 1989) and Werbos
(1990). The principles underlying reinforcement learning have recently been given a firm
mathematical basis by Watkins (1989) who has shown that these algorithms are
implementing an on-line, incremental, approximation to the dynamic programming
method for determining optimal control. Sutton (1990) has also made use of these ideas in formulating a novel theory of classical conditioning in animal learning.
We aim to develop a reinforcement learning system that will allow a simple mobile robot
with minimal sensory apparatus to move at speed around an indoor environment avoiding
collisions with stationary or slow moving obstacles. This paper reports preliminary
results obtained using a simulation of such a robot.
2 THE ROBOT SIMULATION
Our simulation models a three-wheeled mobile vehicle, called the 'sprite', operating in a
simple two-dimensional world (500x500 cm) consisting of walls and obstacles in which
the sprite is represented by a square box (30x30 cm). Restrictions on the acceleration and
the braking response of the vehicle model enforce a degree of realism in its ability to
initiate fast avoidance behaviour. The perceptual system simulates a laser range-finder
giving the logarithmically scaled distance to the nearest obstacle at set angles from its
current orientation. An important feature of the research has been to explore the extent
to which spatially sparse but frequent data can support complex behaviour. We show
below results from simulations using only three rays emitted at angles -60°, 0°, and +60°.
The controller operates directly on this unprocessed sensory input. The continuous
trajectory of the vehicle is approximated by a sequence of discrete time steps. In each
interval the sprite acquires new perceptual data then performs the associated response
generating either a change in position or a feedback signal indicating that a collision has
occurred, preventing the move. After a collision the sprite reverses slightly then attempts to rotate and move off at a random angle (90-180° from its original heading); if this is not
possible it is relocated to a random starting position.
3 LEARNING ALGORITHM
The sprite learns a multi-parameter policy ($\Pi$) and an evaluation (V). These functions
are stored using the CMAC coarse-coding architecture (Albus 1971), and updated by a
reinforcement learning algorithm similar to that described by Watkins (1989). The action
functions comprising the policy are acquired as gaussian probability distributions using
the method proposed by Williams (1988). The following gives a brief summary of the
algorithm used.
Let $x_t$ be the perceptual input pattern at time t and $r_t$ the external reward, then the reinforcement learning error (see Barto et al., 1989) is given by
$$\epsilon_{t+1} = r_{t+1} + \gamma\,V_t(x_{t+1}) - V_t(x_t) \qquad (1)$$
where $\gamma$ is a constant ($0 < \gamma < 1$). This error is used to adjust V and $\Pi$ by gradient descent i.e.
$$V_{t+1}(x) = V_t(x) + \alpha\,\epsilon_{t+1}\,m_t(x) \qquad (2)$$
$$\Pi_{t+1}(x) = \Pi_t(x) + \beta\,\epsilon_{t+1}\,n_t(x) \qquad (3)$$
where $\alpha$ and $\beta$ are learning rates and $m_t(x)$ and $n_t(x)$ are the evaluation and policy eligibility traces for pattern x. The eligibility traces can be thought of as activity in short-term memory that enables learning in the LTM store. The minimum STM requirement is to remember the last input pattern and the exploration gradient $\Delta a_t$ of the last action taken (explained below), hence
$$m_{t+1}(x) = 1 \text{ and } n_{t+1}(x) = \Delta a_t \text{ iff } x \text{ is the current pattern,}\quad m_{t+1}(x) = n_{t+1}(x) = 0 \text{ otherwise.} \qquad (4)$$
Learning occurs faster, however, if the memory trace of each pattern is allowed to decay slowly over time with strength of activity being related to recency. Hence, if the rate of decay is given by $\lambda$ ($0 \le \lambda \le 1$) then for patterns other than the current one
$$m_{t+1}(x) = \lambda\,m_t(x) \text{ and } n_{t+1}(x) = \lambda\,n_t(x).$$
Using a decay rate of less than 1.0 the eligibility trace for any input becomes negligible within a short time, so in practice it is only necessary to store a list of the most recent patterns and actions (in our simulations only the last four values are stored).
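A minimal sketch of these update rules follows (our own code, with illustrative constants, not the paper's settings; V and Pi are assumed to be lookups keyed by input pattern, e.g. tables or CMACs, with Pi[x] holding the four policy parameters as a NumPy array):

```python
def critic_step(V, Pi, memory, x_prev, x_new, r, da_prev,
                gamma=0.95, alpha=0.1, beta=0.01, lam=0.8, depth=4):
    # memory: list of [pattern, m, n_vec] for the most recent patterns (STM).
    err = r + gamma * V[x_new] - V[x_prev]          # Eq. (1)
    memory.insert(0, [x_prev, 1.0, da_prev])        # Eq. (4): m = 1, n = Delta-a
    del memory[depth:]                              # keep only the last few
    for entry in memory:
        x, m, n_vec = entry
        V[x] += alpha * err * m                     # Eq. (2)
        Pi[x] += beta * err * n_vec                 # Eq. (3)
        entry[1], entry[2] = lam * m, lam * n_vec   # trace decay
    return err
```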
The policy acquired by the learning system has two elements ($f$ and $\vartheta$) corresponding to
the desired forward and angular velocities of the vehicle. Each element is specified by a
gaussian pdf and is encoded by two adjustable parameters denoting its mean and standard
deviation (hence the policy as a whole consists of four continuous functions of the input).
In each time-step an action is chosen by selecting randomly from the two distributions
associated with the current input pattern.
In order to update the policy the exploratory component of the action must be computed, this consists of a four-vector with two values for each gaussian element. Following Williams we define a standard gaussian density function g with parameters $\mu$ and $\sigma$ and output y such that
$$g(y,\mu,\sigma)=\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(y-\mu)^2}{2\sigma^2}} \qquad (5)$$
the derivatives of the mean and standard deviation¹ are then given by
$$\Delta\mu=\frac{y-\mu}{\sigma^2} \quad\text{and}\quad \Delta\sigma=\frac{(y-\mu)^2-\sigma^2}{\sigma^3} \qquad (6)$$
The exploration gradient of the action as a whole is therefore the vector
$$\Delta a_t=[\Delta\mu_f,\ \Delta\sigma_f,\ \Delta\mu_\vartheta,\ \Delta\sigma_\vartheta].$$
The four policy functions and the evaluation function are each stored using a CMAC
table. This technique is a form of coarse-coding whereby the euclidean space in which a
function lies is divided into a set of overlapping but offset tilings. Each tiling consists
of regular regions of pre-defined size such that all points within each region are mapped to
a single stored parameter. The value of the function at any point is given by the average
of the parameters stored for the corresponding regions in all of the tilings. In our
¹In practice we use $\ln\sigma$ as the second adjustable parameter to ensure that the standard deviation of the gaussian never has a negative value (see Williams 1988 for details).
simulation each sensory dimension is quantised into five discrete bins resulting in a
5×5×5 tiling; five tilings are overlaid to form each CMAC. If the input space is
enlarged (perhaps by adding further sensors) the storage requirements can be reduced by
using a hashing function to map all the tiles onto a smaller number of parameters. This
is a useful economy when there are large areas of the state space that are visited rarely or
not at all.
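The following is a minimal CMAC in the spirit of this description (our own code: three inputs, five bins per dimension, five offset tilings; the hashing economy is omitted):

```python
import numpy as np

class CMAC:
    def __init__(self, dims=3, bins=5, tilings=5, lo=0.0, hi=1.0):
        self.bins, self.tilings = bins, tilings
        self.lo, self.span = lo, (hi - lo) / bins
        self.w = np.zeros((tilings,) + (bins,) * dims)

    def _cells(self, x):
        # One cell index per tiling; tilings are shifted by fractional offsets.
        for t in range(self.tilings):
            off = t * self.span / self.tilings
            idx = ((np.asarray(x) - self.lo + off) // self.span).astype(int)
            yield (t,) + tuple(np.clip(idx, 0, self.bins - 1))

    def value(self, x):
        # Function value = average of the active weight in each tiling.
        return np.mean([self.w[c] for c in self._cells(x)])

    def update(self, x, delta):
        # Spread a change of `delta` in the value across the tilings.
        for c in self._cells(x):
            self.w[c] += delta / self.tilings
```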
4 EXPLORATION
In order for the sprite to learn useful obstacle avoidance behaviour it has to move around
and explore its environment. If the sprite is rewarded simply for avoiding collisions an
optimal strategy would be to remain still or to stay within a small, safe, circular orbit.
Therefore to force the sprite to explore its world a second source of reinforcement is used
which is a function of its current forward velocity and encourages it to maintain an
optimal speed. To further promote adventurous behaviour the initial policy over the
whole state-space is for the sprite to have a positive speed. A system which has a high
initial expectation of future rewards will settle less rapidly for a locally optimal solution
than a one with a low expectation. Therefore the value function is set initially to the
maximum reward attainable by the sprite.
Improved policies are found by deviating from the currently preferred set of actions.
However, there is a trade-off to be made between exploiting the existing policy to
maximise the short term reward and experimenting with untried actions that have
potentially negative consequences but may eventually lead to a better policy. This
suggests that an annealing process should be applied to the degree of noise in the policy.
In fact, the algorithm described above results in an automatic annealing process (Williams
88) since the variance of each gaussian element decreases as the mean behaviour converges
to a local maximum. However, the width of each gaussian can also increase, if the mean
is locally sub-optimal, allowing for more exploratory behaviour. The final width of the
gaussian depends on whether the local peak in the action function is narrow or flat on top.
The behaviour acquired by the system is therefore more than a set of simple reflexes.
Rather, for each circumstance, there is a range of acceptable actions which is narrow if the
robot is in a tight corner, where its behaviour is severely constrained, but wider in more
open spaces.
5 RESULTS
To test the effectiveness of the learning algorithm the performance of the sprite was
compared before and after fifty-thousand training steps on a number of simple
environments. Over 10 independent runs² in the first environment shown in figure one the average distance travelled between collisions rose from approximately 0.9m (1b) before learning to 47.4m (1c) after training. At the same time the average velocity more
than doubled to just below the optimal speed. The requirement of maintaining an
optimum speed encourages the sprite to follow trajectories that avoid slowing down,
stopping or reversing. However, if the sprite is placed too close to an obstacle to turn
away safely, it can perform an n-point-turn manoeuvre requiring it to stop, back-off, turn
and then move forward. It is thus capable of generating quite complex sequences of
actions.
²Each measure was calculated over a sequence of five thousand simulation-steps with learning disabled.
[Figure One shows four panels: a) Robot casting three rays. b) Trajectories before training. c) Trajectories after training. d) Trajectories in a novel environment.]
Figure One: Sample Paths from the Obstacle Avoidance Simulation. The trajectories show the robot's movement over two thousand simulation steps before and after training. After a collision the robot reverses slightly then rotates to move off at a random angle 90-180° from its original heading; if this is not possible it is relocated to a random position. Crosses indicate locations where collisions occurred, circles show new starting positions.
Some differences have been found in the sprite's ability to negotiate different
environments with the effectiveness of the avoidance learning system varying for different
configurations of obstacles. However, only limited performance loss has been observed
in transferring from a learned environment to an unseen one (e.g. figure 1d), which is
quickly made up if the sprite is allowed to adapt its strategies to suit the new
circumstances. Hence we are encouraged to think that the learning system is capturing
some fairly general strategies for obstacle avoidance.
The different kinds of tactical behaviour acquired by the sprite can be illustrated using
three dimensional slices through the two policy functions (desired forward and angular
velocities). Figure two shows samples of these functions recorded after fifty thousand
training steps in an environment containing two slow moving rectangular obstacles.
Each graph is a function of the three rays cast out by the sprite: the x and y axes show the
depths of the left and right rays and the vertical slices correspond to different depths of the
central ray (9, 35 and 74cm). The graphs show clearly several features that we might
expect of effective avoidance behaviour. Most notably, there is a transition occuring over
the three slices during which the policy changes from one of braking then reversing
(graph a) to one of turning sharply (d) whilst maintaining speed or accelerating (e). This
transition clearly corresponds to the threshold below which a collision cannot be avoided
by swerving but requires backing-off instead. There is a considerable degree of left-right
symmetry (reflection along the line left-ray=right-ray) in most of the graphs. This agrees
with the observation that obstacle avoidance is by and large a symmetric problem.
However some asymmetric behaviour is acquired in order to break the deadlock that arises
when the sprite is faced with obstacles that are equidistant on both sides.
6 CONCLUSION
We have demonstrated that complex obstacle avoidance behaviour can arise from
sequences of learned reactions to immediate perceptual stimu1i. The trajectories generated
often have the appearance of planned activity since individual actions are only appropriate
as part of extended patterns of movement. However, planning only occurs as an implicit
part of a learning process that allows experience of rewarding outcomes to be propagated
backwards to influence future actions taken in similar contexts. This learning process is
effective because it is able to exploit the underlying regularities in the robot's interaction
with its world to find behaviours that consistently achieve its goals.
Acknowledgements
This work was supported by the Science and Engineering Research Council.
References
Albus, J.S., (1971) A theory of cerebellar function. Math Biosci 10:25-61.
Anderson, T.L., and Donath, M. (1988a) Synthesis of reflexive behaviour for a mobHe
robot based upon a stimulus-response paradigm. SPIE Mobile Robots III, 1007:198-210.
Anderson, T.L., and Donath, M. (1988b) A computational structure for enforcing reactive
behaviour in a mobile robot. SPIE Mobile Robots III 1007:370-382.
Barto, A.G., Sutton, R.S., and Brouwer, P.S. (1981) Associative search network: A
reinforcement learning associative memory. Biological Cybernetics 40:201-211.
[Figure Two shows six surface plots: the angular-velocity and forward-velocity policies at central-ray depths of 9cm (panels a, b), 35cm (panels c, d) and 74cm (panels e, f).]
Figure Two: Surfaces showing action policies for depth measures for the central ray of 9, 35 and 74 cm.
Barto, A.G., Anderson, C.W., and Sutton, R.S.(1982) Synthesis of nonlinear control
surfaces by a layered associative search network. Biological Cybernetics 43: 175-185.
Barto, A.G., Sutton, R.S., Anderson, C.W. (1983) Neuronlike adaptive elements that can
solve difficult learning control problems. IEEE Transactions on Systems, Man, and
Cybernetics SMC-13:834-846.
Barto, A.G., Sutton, R.S., and Watkins, CJ.H.C (1989) Learning and sequential decision
making. COINS technical report.
Braitenberg, V (1986) Vehicles: experiments in synthetic psychology, MIT Press,
Cambridge, MA.
Chapman, D. and Agre, P.E. (1987) Pengi: An implementation of a theory of activity.
AAAI-87.
Simon, H.A. (1969) The sciences of the artificial. MIT Press, Cambridge, Massachusetts.
Sutton, R.S. and Barto, A.G. (1990) Time-deriviative models of pavlovian reinforcement.
in Moore, J.W., and Gabriel, M. (eds.) Learning and Computational Neuroscience.
MIT Press, Cambridge, MA.
Watkins, CJ.H.C (1989) Learning from delayed rewards. PhD thesis, King's College.
Cambridge University, UK.
Werbos, P.J. (1990) A menu of designs for reinforcement learning over time. in Miller, III, W.T., Sutton, R.S. and Werbos, P.J. Neural networks for control, MIT Press,
Cambridge, MA.
Williams RJ., (1988) Towards a theory of reinforcement-learning connectionist systems.
Technical Report NV-CCS-88-3, College of Computer Science, Northeastern
University, Boston, MA.
Mandatory Leaf Node Prediction in
Hierarchical Multilabel Classification
Wei Bi
James T. Kwok
Department of Computer Science and Engineering
Hong Kong University of Science and Technology
Clear Water Bay, Hong Kong
{weibi,jamesk}@cse.ust.hk
Abstract
In hierarchical classification, the prediction paths may be required to always end
at leaf nodes. This is called mandatory leaf node prediction (MLNP) and is particularly useful when the leaf nodes have much stronger semantic meaning than
the internal nodes. However, while there have been a lot of MLNP methods in hierarchical multiclass classification, performing MLNP in hierarchical multilabel
classification is much more difficult. In this paper, we propose a novel MLNP
algorithm that (i) considers the global hierarchy structure; and (ii) can be used on
hierarchies of both trees and DAGs. We show that one can efficiently maximize
the joint posterior probability of all the node labels by a simple greedy algorithm.
Moreover, this can be further extended to the minimization of the expected symmetric loss. Experiments are performed on a number of real-world data sets with
tree- and DAG-structured label hierarchies. The proposed method consistently
outperforms other hierarchical and flat multilabel classification methods.
1 Introduction
In many real-world classification problems, the output labels are organized in a hierarchy. For
example, gene functions are arranged in a tree in the Functional Catalog (FunCat) or as a directed
acyclic graph (DAG) in the Gene Ontology (GO) [1]; musical signals are organized in an audio
taxonomy [2]; and documents in the Wikipedia hierarchy. Hierarchical classification algorithms,
which utilize these hierarchical relationships between labels in making predictions, often lead to
better performance than traditional non-hierarchical (flat) approaches.
In hierarchical classification, the labels associated with each pattern can be on a path from the root
to a leaf (full-path prediction); or stop at an internal node (partial-path prediction [3]). Following
the terminology in the recent survey [4], when only full-path predictions are allowed, it is called
mandatory leaf node prediction (MLNP); whereas when partial-path predictions are also allowed,
it is called non-mandatory leaf node prediction (NMLNP). Depending on the application and how
the label hierarchy is generated, either one of these prediction modes may be more relevant. For
example, in the taxonomies of musical signals [2] and genes [5], the leaf nodes have much stronger
semantic/biological meanings than the internal nodes, and MLNP is more important. Besides, sometimes the label hierarchy is learned from the data, using methods like hierarchical clustering [6],
Bayesian network structure learning [7] and label tree methods [8, 9]. In these cases, the internal
nodes are only artificial, and MLNP is again more relevant. In the recent Second Pascal Challenge
on Large-scale Hierarchical Text Classification, the tasks also require MLNP.
In this paper, we focus on hierarchical multilabel classification (HMC), which differs from hierarchical multiclass classification in that the labels of each pattern can fall on a union of paths in the
hierarchy [10]. An everyday example is that a document/image/song/video may have multiple tags.
Because of its practical significance, HMC has been extensively studied in recent years [1,3,10?12].
While there have been a lot of MLNP methods in hierarchical multiclass classification [4], none of
these can be easily extended for the more difficult HMC setting. They all rely on training a multiclass
classifier at each node, and then use a recursive strategy to predict which subtree to pursue at the next
lower level. In hierarchical multiclass classification, exactly one subtree is to be pursued; whereas
in HMC, one has to decide at each node how many and which subtrees to pursue. Even when this
can be performed (e.g., by adjusting the classification threshold heuristically), it is difficult to ensure
that all the prediction paths will end at leaf nodes, and so a lot of partial paths may be resulted.
Alternatively, one may perform MLNP by first predicting the number of leaf labels (k) that the test
pattern has, and then pick the k leaf labels whose posterior probabilities are the largest. Prediction of
k can be achieved by using the MetaLabeler [13], though this involves another, possibly non-trivial,
learning task. Moreover, the posterior probability computed at each leaf l corresponds to a single
prediction path from the root to l. However, the target multilabel in HMC can have multiple paths.
Hence, a better approach is to compute the posterior probabilities of all subtrees/subgraphs that
have k leaf nodes, and then pick the one with the largest probability. However, as there are C(N, k) such
possible subsets (where N is the number of leaves), this can be expensive when N is large.
Recently, Cerri et al. [14] proposed the HMC-label-powerset (HMC-LP), which is specially designed for MLNP in HMC. Its main idea is to reduce the hierarchical problem to a non-hierarchical
problem by running the (non-hierarchical) multilabel classification method of label-powerset [15]
at each level of the hierarchy. However, this significantly increases the number of "meta-labels",
making it unsuitable for large hierarchies. Moreover, as it processes the hierarchy level-by-level,
this cannot be applied on DAGs, where "levels" are not well-defined.
In this paper, we propose an efficient algorithm for MLNP in both tree-structured and DAG-structured hierarchical multilabel classification. The target multilabel is obtained by maximizing
the posterior probability among all feasible multilabels. By adopting a weak "nested approximation" assumption, we show that the resultant optimization problem can be efficiently solved by a
greedy algorithm. Empirical results also demonstrate that this "nested approximation" assumption
holds in general. The rest of this paper is organized as follows. Section 2 describes the proposed
framework for MLNP on tree-structured hierarchies, which is then extended to DAG-structured hierarchies in Section 3. Experimental results are presented in Section 4, and the last section gives
some concluding remarks.
2 Maximum a Posteriori MLNP on Label Trees
In this section, we assume that the label hierarchy is a tree T. With a slight abuse of notation,
we will also use T to denote the set of all the tree nodes, which are indexed from 0 (for the root),
1, 2, ..., N. Let the set of leaf nodes in T be L. For a subset A ⊆ T, its complement is denoted by
A^c = T\A. For a node i, denote its parent by pa(i), and its set of children by child(i). Moreover,
given a vector y, y_A is the subvector of y with indices from A.
In HMC, we are given a set of training examples {(x, y)}, where x is the input and y = [y_0, ..., y_N]' ∈ {0, 1}^{N+1} is the multilabel denoting memberships of x to each of the nodes. Equivalently, y can be represented by a set Ω ⊆ T, such that y_i = 1 if i ∈ Ω, and 0 otherwise. For y (or
Ω) to respect the tree structure, we require that y_i = 1 ⇒ y_{pa(i)} = 1 for any non-root node i ∈ T.
In this paper, we assume that for any group of siblings {i_1, i_2, ..., i_m}, their labels are conditionally independent given the label of their parent pa(i_1) and x, i.e., p(y_{i_1}, y_{i_2}, ..., y_{i_m} | y_{pa(i_1)}, x) = ∏_{j=1}^m p(y_{i_j} | y_{pa(i_1)}, x). This simplification is standard in Bayesian networks and is also commonly
used in HMC [16, 17]. By repeated application of the probability product rule, we have

p(y_0, ..., y_N | x) = p(y_0 | x) ∏_{i ∈ T\{0}} p(y_i | y_{pa(i)}, x).   (1)
2.1 Training
With the simplification in (1), we only need to train estimators for p(y_i = 1 | y_{pa(i)} = 1, x), i ∈ T\{0}. The algorithms to be proposed are independent of the way these probability estimators are
learned. In the experiments, we train a multitask lasso model for each group of sibling nodes, using
those training examples whose shared parent is labeled positive.
2.2 Prediction
For maximum a posteriori MLNP of a test pattern x, we want to find the multilabel Ω* that (i)
maximizes the posterior probability in (1); and (ii) respects T. Suppose that it is also known that x
has k leaf labels. The prediction task is then:

Ω* = argmax_Ω p(y_Ω = 1, y_{Ω^c} = 0 | x)   (2)
s.t. y_0 = 1; k of the leaves in L are labeled 1; Ω contains no partial path; all y_i's respect the label hierarchy.   (3)
Note that p(y_Ω = 1, y_{Ω^c} = 0 | x) considers all the node labels in the hierarchy simultaneously.
In contrast, as discussed in Section 1, existing MLNP methods in hierarchical multiclass/multilabel
classification only consider the hierarchy information locally at each node.
Associate an indicator function ψ : T → {0, 1}^{N+1} with Ω, such that ψ_i ≡ ψ(i) = 1 if i ∈ Ω, and
0 otherwise. The following Proposition shows that (2) can be written as an integer linear program.
Proposition 1. For a label tree, problem (2) can be rewritten as

max_ψ Σ_{i∈T} w_i ψ_i   (4)
s.t. Σ_{i∈L} ψ_i = k,  ψ_0 = 1,  ψ_i ∈ {0,1} ∀i ∈ T,
     Σ_{j∈child(i)} ψ_j ≥ 1 ∀i ∈ L^c : ψ_i = 1,
     ψ_i ≤ ψ_{pa(i)} ∀i ∈ T\{0},   (5)

where

w_i = Σ_{l∈child(i)} log(1 − p_l),                                 i = 0
      log p_i − log(1 − p_i),                                      i ∈ L        (6)
      log p_i − log(1 − p_i) + Σ_{l∈child(i)} log(1 − p_l),        i ∈ L^c\{0},

and p_i ≡ p(y_i = 1 | y_{pa(i)} = 1, x).
Problem (4) has C(|L|, k) candidate solutions, which can be expensive to solve when T is large. In
the following, we will extend the nested approximation property (NAP), first introduced in [18] for
model-based compressed sensing, to constrain the optimal solution.
Definition 1 (k-leaf-sparse). A multilabel y is k-leaf-sparse if k of the leaf nodes are labeled one.
Definition 2 (Nested Approximation Property (NAP)). For a pattern x, let its optimal k-leaf-sparse
multilabel be Ω_k. The NAP is satisfied if {i : i ∈ Ω_k} ⊆ {i : i ∈ Ω_{k'}} for all k < k'.
Note that NAP is often implicitly assumed in many HMC algorithms. For example, consider the
common approach that trains a binary classifier at each node and recursively predicts from the root to
the subtrees. When the classification threshold at each node is high, prediction stops early; whereas
when the threshold is lowered, prediction can go further down the hierarchy. Hence, nodes that
are labeled positive at a high threshold will always be labeled at a lower threshold, implying NAP.
Another example is the CSSA algorithm in [11]. Since it is greedy, a larger solution (with more
labels predicted positive) always includes the smaller solutions.
Algorithm 1 shows the proposed algorithm, which will be called MAS (MAndatory leaf node prediction on Structures). Similar to [11], Algorithm 1 is also greedy and based on keeping track of supernodes. However, the definition of a supernode and its updating are different. Each node i ∈ T
is associated with the weight w_i in (6). Initially, only the root is selected (ψ_0 = 1). For each leaf l
in L, we create a supernode, which is a subset of T containing all the nodes on the path from l to the
root. Given |L| leaves in T, there are initially |L| supernodes. Moreover, all of them are unassigned
(i.e., each contains an unselected leaf node). Each supernode S has a supernode value (SNV), which
is defined as SNV(S) = Σ_{i∈S} w_i.
Algorithm 1 MAS (Mandatory leaf node prediction on structures).
1: Initialization: Initialize every node (except the root) with ψ_i ← 0; Ω ← {0}; create a supernode from each leaf with its ancestors.
2: for iteration = 1 to k do
3:   select the unassigned supernode S* with the largest SNV;
4:   assign all unselected nodes in S* with ψ_i ← 1;
5:   Ω ← Ω ∪ S*;
6:   for each unassigned supernode S do
7:     update the SNV of S (using Algorithm 2 for trees and Algorithm 3 for DAGs);
8:   end for
9: end for
In each iteration, the supernode S* with the largest SNV is selected among all the unassigned supernodes. S* is then assigned, with the ψ_i's of all its constituent nodes set to 1, and Ω is updated
accordingly. For each remaining unassigned supernode S, we update its SNV to be the value that it
will take if S is merged with Ω, i.e., SNV(S) ← Σ_{i∈S∪Ω} w_i = Σ_{i∈S\Ω} w_i + SNV(Ω). Since each
unassigned S contains exactly one leaf and we have a tree structure, this update can be implemented
efficiently in O(h^2) time, where h is the height of the tree (Algorithm 2).
Algorithm 2 Updating the SNV of an unassigned tree supernode S, containing the leaf l.
1: node ← l;
2: SNV(S) ← SNV(Ω);
3: repeat
4:   SNV(S) ← SNV(S) + w_node;
5:   node ← pa(node);
6: until node ∈ Ω.

Algorithm 3 Updating the SNV of an unassigned DAG supernode S, containing the leaf l.
1: insert l to T;
2: SNV(S) ← SNV(Ω);
3: repeat
4:   node ← find-max(T);
5:   delete node from T;
6:   SNV(S) ← SNV(S) + w_node;
7:   insert nodes in Pa(node)\(Ω ∪ T) to T;
8: until T = ∅.
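To make the procedure concrete, the following is a minimal Python sketch of Algorithm 1 for a tree hierarchy. It is our illustration rather than the authors' implementation: the node weights follow eq. (6), the conditionals p(y_i = 1 | y_pa(i) = 1, x) are assumed to be given in a dictionary, and for clarity the supernode sums are recomputed with sets instead of the O(h^2) incremental update of Algorithm 2.

import math

def mas_tree(parent, children, leaves, p, k):
    """Greedy MAS (Algorithm 1) on a tree; a simplified sketch.
    parent: node -> parent (the root 0 has no entry)
    children: node -> list of children
    leaves: list of leaf nodes
    p: node -> p(y_i = 1 | y_pa(i) = 1, x) for every non-root node
    k: number of leaf labels to predict
    """
    nodes = set(children) | {c for cs in children.values() for c in cs}
    def weight(i):                        # eq. (6)
        s = sum(math.log(1.0 - p[l]) for l in children.get(i, []))
        if i == 0:
            return s                      # root
        return math.log(p[i]) - math.log(1.0 - p[i]) + s  # s == 0 at a leaf
    w = {i: weight(i) for i in nodes}
    def path_to_root(l):                  # initial supernode of leaf l
        s, n = {l}, l
        while n in parent:
            n = parent[n]
            s.add(n)
        return s
    supernode = {l: path_to_root(l) for l in leaves}
    omega = {0}                           # root is always selected
    for _ in range(k):
        # SNV of each unassigned supernode when merged with omega
        snv = {l: sum(w[i] for i in s - omega)
               for l, s in supernode.items() if l not in omega}
        best = max(snv, key=snv.get)      # step 3: largest SNV
        omega |= supernode[best]          # steps 4-5
    return omega                          # the k-leaf-sparse multilabel

Since SNV(Ω) is common to all candidates, comparing the incremental sums over S\Ω is equivalent to comparing the merged SNVs. Because Ω_k ⊆ Ω_{k+1} under the NAP (Section 2.2.1), a single run with k = |L| also yields every smaller solution along the way.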
The following Proposition shows that MAS finds the best k-leaf-sparse prediction.
Proposition 2. Algorithm 1 obtains an optimal solution ψ* of (4) under the NAP assumption.
Finally, we study the time complexity of Algorithm 1. Step 3 takes O(|L|) time; steps 4 and 5 take
O(h) time; and updating all the remaining unassigned supernodes takes O(h^2 |L|) time. Therefore,
each iteration takes O(h^2 |L|) time, and the total time to obtain an optimal k-leaf-sparse solution is
O(h^2 k |L|). In contrast, a brute-force search will take C(|L|, k) time.
2.2.1 Unknown Number of Labels
In practice, the value of k may not be known. The straightforward approach is to run Algorithm 1
with k = 1, ..., |L|, and find the Ω_k ∈ {Ω_1, ..., Ω_{|L|}} that maximizes the posterior probability in
(1). However, recall that Ω_k ⊆ Ω_{k+1} under the NAP assumption. Hence, we can simply set k = |L|,
and Ω_i is immediately obtained as the Ω in iteration i. The total time complexity is O(h^2 |L|^2). In
contrast, a brute-force search takes O(2^{|L|}) time when k is unknown.
2.3 MLNP that Minimizes Risk
While maximizing the posterior probability minimizes the 0-1 loss, another loss function that has
been popularly used in hierarchical classification is the H-loss [12]. However, along each prediction
path, the H-loss only penalizes the first classification mistake closest to the root. On the other hand, we
are more interested in the leaf nodes in MLNP. Hence, we will adopt the symmetric loss instead,
which is defined as ℓ(Ω, Ω̃) = |Ω\Ω̃| + |Ω̃\Ω|, where Ω̃ is the true multilabel for the given x, and
Ω is the prediction. However, this weights mistakes in any part of the hierarchy equally; whereas in
HMC, a mistake that occurs at a higher level of the hierarchy is usually considered more crucial.
Let I(·) be the indicator function that returns 1 when the argument holds, and 0 otherwise. We thus
incorporate the hierarchy structure into ℓ(Ω, Ω̃) by extending it as Σ_i c_i I(i ∈ Ω\Ω̃) + c_i I(i ∈ Ω̃\Ω),
where c_0 = 1, c_i = c_{pa(i)}/nsibl(i) as in [3], and nsibl(i) is the number of siblings of i (including i
itself). Finally, one can also allow different relative importance (λ ≥ 0) for the false positives and
negatives, and generalize ℓ(Ω, Ω̃) further as

ℓ(Ω, Ω̃) = Σ_i c_i^+ I(i ∈ Ω\Ω̃) + c_i^− I(i ∈ Ω̃\Ω),   (7)

where c_i^+ = 2c_i/(1 + λ) and c_i^− = 2λc_i/(1 + λ).
Given a loss function ℓ(·, ·), from Bayesian decision theory, the optimal multilabel Ω* is the one that
minimizes the expected loss: Ω* = argmin_Ω Σ_{Ω̃} ℓ(Ω, Ω̃) p(y_{Ω̃} = 1, y_{Ω̃^c} = 0 | x). The proposed
formulation can be easily extended for this. The following Proposition shows that it leads to a
problem very similar to (4). Extension to a DAG-structured label hierarchy is analogous.
Proposition 3. With a label tree and the loss function in (7), the optimal Ω* that minimizes the
expected loss can be obtained by solving (4), but with w_i = (c_i^+ + c_i^−) p(y_i = 1 | x) − c_i^−.
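As a concrete illustration of Proposition 3 (a minimal Python sketch of ours, not from the original paper; the function name and dictionary interface are assumptions for illustration), the risk-minimizing node weights can be computed directly from the marginals p(y_i = 1 | x):

def risk_weights(marginal, c, lam):
    # w_i = (c_i^+ + c_i^-) p(y_i = 1 | x) - c_i^-, with
    # c_i^+ = 2 c_i / (1 + lam) and c_i^- = 2 lam c_i / (1 + lam).
    w = {}
    for i, prob in marginal.items():
        c_pos = 2.0 * c[i] / (1.0 + lam)
        c_neg = 2.0 * lam * c[i] / (1.0 + lam)
        w[i] = (c_pos + c_neg) * prob - c_neg
    return w

These weights simply replace (6) in problem (4), so the same greedy MAS procedure applies unchanged.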
3 Maximum a Posteriori MLNP on Label DAGs
When the label hierarchy is a DAG G, on using the same conditional independence simplification in
Section 2, we have

p(y_0, y_1, ..., y_N | x) = p(y_0 | x) ∏_{i ∈ G\{0}} p(y_i | y_{Pa(i)}, x),   (8)
where Pa(i) is the set of parents of node i. The prediction task involves the same optimization
problem as in (2). However, there are now two interpretations on how the labels should respect the
DAG in (3) [1, 11]. The first one requires that if a node is labeled positive, all its parents must also
be positive. In bioinformatics, this is also called the true path rule that governs the DAG-structured
GO taxonomy on gene functions. The alternative is that a node can be labeled positive if at least one
of its parents is positive. Here, we adopt the first interpretation which is more common.
A direct maximization of p(y0 , y1 , . . . , yN |x) by (8) is NP-hard [19]. Moreover, the size of each
probability table p(yi |yPa(i) , x) in (8) grows exponentially with |Pa(i)|. Hence, it can be both impractical and inaccurate when G is large and the sample size is limited. In the following, we assume
p(y_0, y_1, ..., y_N | x) = (1/n(x)) p(y_0 | x) ∏_{i ∈ G\{0}} ∏_{j ∈ Pa(i)} p(y_i | y_j, x),   (9)

where n(x) is a normalization term. This follows from the approach of composite likelihood (or
pseudolikelihood) [20], which replaces a difficult probability density function by a set of marginal or
conditional events that are easier to evaluate. In particular, (9) corresponds to the so-called pairwise
conditional likelihood that has been used in longitudinal studies and bioinformatics [21]. Composite
likelihood has been successfully used in different applications such as genetics, spatial statistics
and image analysis. The connection between composite likelihood and various (flat) multilabel
classification models is also recently discussed in [21]. Moreover, by using (9), the 2^{|Pa(i)|} numbers
in the probability table p(y_i | y_{Pa(i)}, x) are replaced by the |Pa(i)| numbers in {p(y_i | y_j, x)}_{j ∈ Pa(i)},
and thus the estimates obtained are much more reliable. The following Proposition shows that
maximizing (9) can be reduced to a problem similar to (4).
Proposition 4. With the assumption (9), problem (2) for the label DAG can be rewritten as

max_ψ Σ_{i∈G} w_i ψ_i   (10)
s.t. Σ_{i∈L} ψ_i = k,  ψ_0 = 1,  ψ_i ∈ {0,1} ∀i ∈ G,
     Σ_{j∈child(i)} ψ_j ≥ 1 ∀i ∈ L^c : ψ_i = 1,
     ψ_i ≤ ψ_j ∀j ∈ Pa(i), ∀i ∈ G\{0},   (11)

where

w_i = Σ_{l∈child(0)} log(1 − p_{l0}),                                              i = 0
      Σ_{j∈Pa(i)} (log p_{ij} − log(1 − p_{ij})),                                  i ∈ L
      Σ_{j∈Pa(i)} (log p_{ij} − log(1 − p_{ij})) + Σ_{l∈child(i)} log(1 − p_{li}), i ∈ L^c\{0},

and p_{ij} ≡ p(y_i = 1 | y_j = 1, x) for j ∈ Pa(i).
Problem (10) is similar to problem (4), except in the definition of w_i and in that the hierarchy constraint
(11) is more general than (5). When the DAG is indeed a tree, (10) reduces to (4), and Proposition 4
reduces to Proposition 1. When k is unknown, the same procedure in Section 2.2.1 applies.
In the proof of Proposition 2, we do not constrain the number of parents of each node. Hence, (10)
can be solved efficiently as before, except for two modifications: (i) each initial supernode now
contains a leaf and its ancestors along all paths to the root; (ii) since Pa(i) is a set and the hierarchy
is a DAG, updating the SNV becomes more complicated. In Algorithm 3, T is a self-balancing binary
search tree (BST) that keeps track of the nodes in S\Ω using their topological order¹. To facilitate
the checking of whether a node is in Ω (step 7), Ω also stores its nodes in a self-balancing BST.
Recall that for a self-balancing BST, the operations of insert, delete, find-max and finding an element
all take O(log V) time, where V ≤ N is the number of nodes in the BST. Hence, updating the SNV
of one supernode by Algorithm 3 takes O(N log N) time. As O(|L|) supernodes need to be updated
in each iteration of Algorithm 1, this step (which is the most expensive step in Algorithm 1) takes
O(|L| · N log N) time. The total time for Algorithm 1 is O(k · |L| · N log N).
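As an illustration of Algorithm 3, the self-balancing BST can be emulated with a max-heap keyed on topological order plus a visited set; the Python sketch below (ours, under the stated assumptions) computes the SNV of one unassigned supernode merged with Ω:

import heapq

def snv_dag(leaf, parents, w, omega, topo, snv_omega):
    """SNV of the unassigned DAG supernode containing `leaf`, merged with omega.
    parents: node -> list of parents; w: node -> weight;
    topo: node -> topological index (smaller means nearer the root)."""
    heap = [(-topo[leaf], leaf)]      # max-heap on topological order
    seen = {leaf}                     # nodes already inserted (the set T)
    snv = snv_omega                   # start from SNV(omega)
    while heap:                       # until T is empty
        _, node = heapq.heappop(heap)         # find-max, then delete
        snv += w[node]
        for pa in parents.get(node, ()):      # insert Pa(node) \ (omega and T)
            if pa not in omega and pa not in seen:
                seen.add(pa)
                heapq.heappush(heap, (-topo[pa], pa))
    return snv

Each node enters the heap at most once even when it is reachable along several paths, so one update costs O(N log N), matching the analysis above.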
4 Experiments
In this section, experiments are performed on a number of benchmark multilabel data sets2 , with
both tree- and DAG-structured label hierarchies (Table 1). As pre-processing, we remove examples
that contain partial label paths and nodes with fewer than 10 positive examples. At each parent node,
we then train a multitask lasso model with logistic loss using the MALSAR package [22].
4.1
Classification Performance
The proposed MAS algorithm is compared with HMC-LP [14], the only existing algorithm that can
perform MLNP on trees (but not on DAGs). We also compare with the combined use of MetaLabeler
[13] and NMLNP methods as described in Section 1. These NMLNP methods include (i) HBR,
which is modified from the hierarchical classifier H-SVM [3], by replacing its base learner SVM
with the multitask lasso as for MAS; (ii) CLUS-HMC [1]; and (iii) flat BR [23], which is a popular
MLNP method but does not use the hierarchy information. For performance evaluation, we use
the hierarchical F-measure (HF) which has been commonly used in hierarchical classification [4].
Results based on 5-fold cross-validation are shown in Table 1. As can be seen, MAS is always
among the best on almost all data sets.
Next, we compare the methods using the loss in (7), where the relative importance for false positives
vs negatives (λ) is set to the ratio of the numbers of negative and positive training labels. Results
are shown in Table 2. As can be seen, the risk-minimizing version (MASR) can always obtain the
smallest loss. We also vary λ in the range {1/10, 1/9, ..., 1/2, 1, 2, ..., 9, 10}. As can be seen from
Figure 1, MASR consistently outperforms the other methods, sometimes by a significant margin.
Finally, Figure 2 illustrates some example query images and their misclassifications by MAS, MASR
and BR on the caltech101 data set. As can be seen, even when MAS/MASR misclassifies the image,
the hierarchy often helps to keep the prediction close to the true label.
4.2 Validating the NAP Assumption
In this section, we verify the validity of the NAP assumption. For each test pattern, we use brute-force search to find its best k-leaf-sparse prediction, and check if it includes the best (k − 1)-leaf-sparse prediction. As brute-force search is very expensive, experiments are only performed on four
1. We number the sorted order such that nodes nearer to the root are assigned smaller values. Note that the topological sort only needs to be performed once as part of pre-processing.
2. Downloaded from http://mulan.sourceforge.net/datasets.html and http://dtai.cs.kuleuven.be/clus/hmcdatasets/
Table 1: HF values obtained by the various methods on all data sets. The best results and those
that are not statistically worse (according to paired t-test with p-value less than 0.05) are in bold.
HMC-LP and CLUS-HMC cannot be run on the caltech101 data, which is large and dense.
[Table 1 body flattened in extraction. For each of the 33 data sets (rcv1v2 subset1-5, delicious, enron, wipo, caltech-101, and the 12 FunCat and 12 GO gene data sets), it lists #pattern, #leaf, the average number of leaf labels per pattern, and the HF values of MAS, HMC-LP, HBR, CLUS-HMC and flat BR (the latter three used with MetaLabeler).]
smaller data sets for k = 2, . . . , 10. Figure 3 shows the percentage of test patterns satisfying the
NAP assumption at different values of k. As can be seen, the NAP holds almost 100% of the time.
5 Conclusion
In this paper, we proposed a novel hierarchical multilabel classification (HMC) algorithm for mandatory leaf node prediction. Unlike many hierarchical multilabel/multiclass classification algorithms,
it utilizes the global hierarchy information by finding the multilabel that has the largest posterior
probability over all the node labels. By adopting a weak "nested approximation" assumption, which
is already implicitly assumed in many HMC algorithms, we showed that this can be efficiently
optimized by a simple greedy algorithm. Moreover, it can be extended to minimize the risk associated with the (hierarchically weighted) symmetric loss. Experiments performed on a number of
real-world data sets demonstrate that the proposed algorithms are computationally simple and more
accurate than existing HMC and flat multilabel classification methods.
Acknowledgment
This research has been partially supported by the Research Grants Council of the Hong Kong Special
Administrative Region under grant 614012.
[Figure 1 panels flattened in extraction: (a) rcv1subset1, (b) enron, (c) struc(funcat); each plots the average testing loss of MASR, MAS and HBR against λ.]
Figure 1: Hierarchically weighted symmetric loss values (7) for different λ's.
[Figure 2 content flattened in extraction: three caltech101 query images with the label-hierarchy paths predicted by MASR, MAS and BR.]
Figure 2: Example misclassifications on the caltech101 data set.
Table 2: Hierarchically weighted symmetric loss values (7) on the tree-structured data sets.
[Table 2 body flattened in extraction. For each of the 21 tree-structured data sets (in the order of Table 1), it lists the loss of MASR, MAS, HBR, CLUS-HMC and BR (the latter three used with MetaLabeler), and HMC-LP; HMC-LP and CLUS-HMC entries are missing for caltech-101.]
[Figure 3 panels flattened in extraction: percentage of instances satisfying the NAP (90-100%) versus k (2-10) for (a) pheno(funcat), (b) pheno(GO), (c) eisen(funcat), (d) eisen(GO).]
Figure 3: Percentage of patterns satisfying the NAP assumption at different values of k.
References
[1] C. Vens, J. Struyf, L. Schietgat, S. Džeroski, and H. Blockeel. Decision trees for hierarchical multi-label classification. Machine Learning, 73:185-214, 2008.
[2] J.J. Burred and A. Lerch. A hierarchical approach to automatic musical genre classification. In Proceedings of the 6th International Conference on Digital Audio Effects, 2003.
[3] N. Cesa-Bianchi, C. Gentile, and L. Zaniboni. Incremental algorithms for hierarchical classification. Journal of Machine Learning Research, 7:31-54, 2006.
[4] C.N. Silla and A.A. Freitas. A survey of hierarchical classification across different application domains. Data Mining and Knowledge Discovery, 22(1-2):31-72, 2011.
[5] Z. Barutcuoglu and O.G. Troyanskaya. Hierarchical multi-label prediction of gene function. Bioinformatics, 22:830-836, 2006.
[6] K. Punera, S. Rajan, and J. Ghosh. Automatically learning document taxonomies for hierarchical classification. In Proceedings of the 14th International Conference on World Wide Web, pages 1010-1011, 2005.
[7] M.-L. Zhang and K. Zhang. Multi-label learning by exploiting label dependency. In Proceedings of the 16th International Conference on Knowledge Discovery and Data Mining, pages 999-1008, 2010.
[8] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems 23, pages 163-171. 2010.
[9] J. Deng, S. Satheesh, A.C. Berg, and L. Fei-Fei. Fast and balanced: Efficient label tree learning for large scale object recognition. In Advances in Neural Information Processing Systems 24, pages 567-575. 2011.
[10] J. Rousu, C. Saunders, S. Szedmak, and J. Shawe-Taylor. Kernel-based learning of hierarchical multilabel classification models. Journal of Machine Learning Research, 7:1601-1626, 2006.
[11] W. Bi and J.T. Kwok. Multi-label classification on tree- and DAG-structured hierarchies. In Proceedings of the 28th International Conference on Machine Learning, pages 17-24, 2011.
[12] N. Cesa-Bianchi, C. Gentile, and L. Zaniboni. Hierarchical classification: Combining Bayes with SVM. In Proceedings of the 23rd International Conference on Machine Learning, pages 177-184, 2006.
[13] L. Tang, S. Rajan, and V.K. Narayanan. Large scale multi-label classification via MetaLabeler. In Proceedings of the 18th International Conference on World Wide Web, pages 211-220, 2009.
[14] R. Cerri, A.C.P.L.F. de Carvalho, and A.A. Freitas. Adapting non-hierarchical multilabel classification methods for hierarchical multilabel classification. Intelligent Data Analysis, 15:861-887, 2011.
[15] G. Tsoumakas and I. Vlahavas. Random k-labelsets: An ensemble method for multilabel classification. In Proceedings of the 18th European Conference on Machine Learning, pages 406-417, Warsaw, Poland, 2007.
[16] N. Cesa-Bianchi, C. Gentile, A. Tironi, and L. Zaniboni. Incremental algorithms for hierarchical classification. In Advances in Neural Information Processing Systems 17, pages 233-240. 2005.
[17] J.H. Zaragoza, L.E. Sucar, and E.F. Morales. Bayesian chain classifiers for multidimensional classification. In Twenty-Second International Joint Conference on Artificial Intelligence, pages 2192-2197, 2011.
[18] R.G. Baraniuk, V. Cevher, M.F. Duarte, and C. Hegde. Model-based compressive sensing. IEEE Transactions on Information Theory, 56:1982-2001, 2010.
[19] S.E. Shimony. Finding MAPs for belief networks is NP-hard. Artificial Intelligence, 68:399-410, 1994.
[20] C. Varin, N. Reid, and D. Firth. An overview of composite likelihood methods. Statistica Sinica, 21:5-42, 2011.
[21] Y. Zhang and J. Schneider. A composite likelihood view for multi-label classification. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics, pages 1407-1415, 2012.
[22] J. Zhou, J. Chen, and J. Ye. MALSAR: Multi-tAsk Learning via StructurAl Regularization. Arizona State University, 2012.
[23] G. Tsoumakas, I. Katakis, and I. Vlahavas. Mining multi-label data. In Data Mining and Knowledge Discovery Handbook, pages 667-685. Springer, 2010.
Bayesian estimation of discrete entropy with mixtures
of stick-breaking priors
Evan Archer*^{1,2,4}, Il Memming Park*^{2,3,4}, & Jonathan W. Pillow^{2,3,4}
1. Institute for Computational and Engineering Sciences
2. Center for Perceptual Systems, 3. Dept. of Psychology,
4. Division of Statistics & Scientific Computation
The University of Texas at Austin
Abstract
We consider the problem of estimating Shannon?s entropy H in the under-sampled
regime, where the number of possible symbols may be unknown or countably
infinite. Dirichlet and Pitman-Yor processes provide tractable prior distributions
over the space of countably infinite discrete distributions, and have found major
applications in Bayesian non-parametric statistics and machine learning. Here
we show that they provide natural priors for Bayesian entropy estimation, due
to the analytic tractability of the moments of the induced posterior distribution
over entropy H. We derive formulas for the posterior mean and variance of H
given data. However, we show that a fixed Dirichlet or Pitman-Yor process prior
implies a narrow prior on H, meaning the prior strongly determines the estimate
in the under-sampled regime. We therefore define a family of continuous mixing
measures such that the resulting mixture of Dirichlet or Pitman-Yor processes
produces an approximately flat prior over H. We explore the theoretical properties
of the resulting estimators and show that they perform well on data sampled from
both exponential and power-law tailed distributions.
1 Introduction
An important statistical problem in the study of natural systems is to estimate the entropy of an
unknown discrete distribution on the basis of an observed sample. This is often much easier than
the problem of estimating the distribution itself; in many cases, entropy can be accurately estimated
with fewer samples than the number of distinct symbols. Entropy estimation remains a difficult
problem, however, as there is no unbiased estimator for entropy, and the maximum likelihood estimator exhibits severe bias for small datasets. Previous work has tended to focus on methods for
computing and reducing this bias [1?5]. Here, we instead take a Bayesian approach, building on a
framework introduced by Nemenman et al [6]. The basic idea is to place a prior over the space of
probability distributions that might have generated the data, and then perform inference using the
induced posterior distribution over entropy. (See Fig. 1).
We focus on the setting where our data are a finite sample from an unknown, or possibly even countably infinite, number of symbols. A Bayesian approach requires us to consider distributions over
the infinite-dimensional simplex, Δ∞. To do so, we employ the Pitman-Yor (PYP) and Dirichlet
(DP) processes [7-9]. These processes provide an attractive family of priors for this problem, since:
(1) the posterior distribution over entropy has analytically tractable moments; and (2) distributions
drawn from a PYP can exhibit power-law tails, a feature commonly observed in data from social, biological, and physical systems [10?12]. However, we show that a fixed PYP prior imposes a narrow
* These authors contributed equally.
[Figure 1 schematic flattened in extraction: parameter θ → distribution π → data {x_j} (N copies), with entropy H a deterministic function of π.]
Figure 1: Graphical model illustrating the ingredients for Bayesian entropy estimation. Arrows indicate conditional dependencies between variables, and the gray "plate" denotes multiple copies of a random variable (with the number of copies N indicated at bottom). For entropy estimation, the joint probability distribution over entropy H, data x = {x_j}, discrete distribution π = {π_i}, and parameter θ factorizes as: p(H, x, π, θ) = p(H|π) p(x|π) p(π|θ) p(θ). Entropy is a deterministic function of π, so p(H|π) = δ(H + Σ_i π_i log π_i).
prior over entropy, leading to severe bias and overly narrow credible intervals for small datasets. We
address this shortcoming by introducing a set of mixing measures such that the resulting Pitman-Yor
Mixture (PYM) prior provides an approximately non-informative (i.e., flat) prior over entropy.
The remainder of the paper is organized as follows. In Section 2, we introduce the entropy estimation
problem and review prior work. In Section 3, we introduce the Dirichlet and Pitman-Yor processes
and discuss key mathematical properties relating to entropy. In Section 4, we introduce a novel
entropy estimator based on PYM priors and derive several of its theoretical properties. In Section 5,
we show applications to data.
2 Entropy Estimation
Consider samples x := {x_j}_{j=1}^N drawn i.i.d. from an unknown discrete distribution π := {π_i}_{i=1}^A on
a finite or (countably) infinite alphabet X. We wish to estimate the entropy of π,

H(π) = − Σ_{i=1}^A π_i log π_i,   (1)

where we identify X = {1, 2, ..., A} as the alphabet without loss of generality (where the
alphabet size A may be infinite), and π_i > 0 denotes the probability of observing symbol i. We
focus on the setting where N ≪ A.
A reasonable first step toward estimating H is to estimate the distribution π. The observed counts n_k = Σ_{j=1}^N 1_{x_j = k} for each k ∈ X yield the empirical distribution π̂, where
π̂_k = n_k / N. Plugging this estimate for π into eq. 1, we obtain the so-called "plugin" estimator,
Ĥ_plugin = − Σ_i π̂_i log π̂_i, which is also the maximum-likelihood estimator. It exhibits substantial
negative bias in the undersampled regime.
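For instance, the plugin estimate is a direct transcription (a sketch of ours; the function name is an assumption for illustration):

import numpy as np

def plugin_entropy(samples):
    """Maximum-likelihood ('plugin') entropy estimate, in nats."""
    _, counts = np.unique(samples, return_counts=True)
    phat = counts / counts.sum()
    return float(-np.sum(phat * np.log(phat)))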
2.1 Bayesian entropy estimation
The Bayesian approach to entropy estimation involves formulating a prior over distributions π, and
then turning the crank of Bayesian inference to infer H using the posterior distribution. Bayes' least
squares (BLS) estimators take the form:

Ĥ(x) = E[H | x] = ∫ H(π) p(π | x) dπ,   (2)

where p(π|x) is the posterior over π under some prior p(π) and the categorical likelihood p(x|π) = ∏_j p(x_j | π), where p(x_j = i) = π_i. The conditional p(H|π) = δ(H + Σ_i π_i log π_i), since H is
deterministically related to π. To the extent that p(π) expresses our true prior uncertainty over the
unknown distribution that generated the data, this estimate is optimal in a least-squares sense, and
the corresponding credible intervals capture our uncertainty about H given the data.
For distributions with known finite alphabet size A, the Dirichlet distribution provides an obvious
choice of prior due to its conjugacy to the discrete (or multinomial) likelihood. It takes the form
p(π) ∝ ∏_{i=1}^A π_i^{α−1}, for π on the A-dimensional simplex (π_i ≥ 0, Σ_i π_i = 1), with concentration
[Figure 2 panels flattened in extraction: empirical complementary CDFs P[wordcount > n] on log-log axes for neural alphabet frequencies (left) and word frequencies in Moby Dick (right), with DP and PYP samples and 95% confidence bands overlaid.]
Figure 2: Power-law frequency distributions from neural signals and natural language. We compare
samples from the DP (red) and PYP (blue) priors for two datasets with heavy tails (black). In both
cases, we compare the empirical CDF with distributions sampled given d and ? fixed to their ML
estimates. For both datasets, the PYP better captures the heavy-tailed behavior of the data. Left:
Frequencies among N = 1.2e6 neural spike words from 27 simultaneously-recorded retinal ganglion
cells, binarized and binned at 10 ms [18]. Right: Frequency of N = 217826 words in the novel Moby
Dick by Herman Melville.
parameter α [13]. Many previously proposed estimators can be viewed as Bayesian estimators with
a particular fixed choice of α. (See [14] for an overview).
2.2 Nemenman-Shafee-Bialek (NSB) estimator
In a seminal paper, Nemenman et al. [6] showed that Dirichlet priors impose a narrow prior over
entropy. In the under-sampled regime, Bayesian estimates using a fixed Dirichlet prior are severely
biased, and have small credible intervals (i.e., they give highly confident wrong answers!). To address this problem, [6] suggested a mixture-of-Dirichlets prior:

p(π) = ∫ p_Dir(π | α) p(α) dα,   (3)

where p_Dir(π|α) denotes a Dir(α) prior on π. To construct an approximately flat prior on entropy,
[6] proposed the mixing weights on α given by

p(α) ∝ (d/dα) E[H | α] = A ψ_1(Aα + 1) − ψ_1(α + 1),   (4)

where E[H|α] denotes the expected value of H under a Dir(α) prior, and ψ_1(·) denotes the trigamma function. To the extent that p(H|α) resembles a delta function, eq. 3 implies a uniform prior
for H on [0, log A]. The BLS estimator under the NSB prior can then be written as

Ĥ_nsb = E[H | x] = ∫∫ H(π) p(π | x, α) p(α | x) dπ dα = ∫ E[H | x, α] (p(x|α) p(α) / p(x)) dα,   (5)

where E[H|x, α] is the posterior mean under a Dir(α) prior, and p(x|α) denotes the evidence,
which has a Polya distribution. Given analytic expressions for E[H|x, α] and p(x|α), this estimate
is extremely fast to compute via 1D numerical integration in α. (See Appendix for details.)
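A numerical sketch of eq. 5 is given below (ours, not the authors' code; it fills in the standard Dirichlet posterior-mean and Polya-evidence formulas that the text defers to the Appendix, and assumes a known finite alphabet size A):

import numpy as np
from scipy.special import gammaln, psi, polygamma

def nsb_entropy(counts, A):
    """Sketch of the NSB estimator (eqs. 3-5) by 1D quadrature over alpha."""
    n = np.asarray(counts, dtype=float)   # counts of the K observed symbols
    N, K = n.sum(), len(n)
    alphas = np.logspace(-4, 4, 500)
    log_post = np.empty_like(alphas)
    h_mean = np.empty_like(alphas)
    for j, a in enumerate(alphas):
        # log p(x|a) + log p(a), dropping terms constant in a
        log_post[j] = (gammaln(A * a) - gammaln(N + A * a)
                       + np.sum(gammaln(n + a)) - K * gammaln(a)
                       + np.log(A * polygamma(1, A * a + 1)
                                - polygamma(1, a + 1)))
        # posterior mean of H under Dir(a), unseen symbols included
        na = np.append(n + a, np.full(A - K, a))
        h_mean[j] = psi(N + A * a + 1) - np.sum(na * psi(na + 1)) / (N + A * a)
    w = np.exp(log_post - log_post.max())
    return np.trapz(w * h_mean, alphas) / np.trapz(w, alphas)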
Next, we shall consider the problem of extending this approach to infinite-dimensional discrete
distributions. Nemenman et al. proposed one such extension using an approximation to Ĥ_nsb in the
limit A → ∞, which we refer to as Ĥ_nsb∞ [15, 16]. Unfortunately, Ĥ_nsb∞ increases unboundedly
with N (as noted by [17]), and it performs poorly for the examples we consider.
3 Stick-Breaking Priors
To construct a prior over countably infinite discrete distributions we employ a class of distributions
from nonparametric Bayesian statistics known as stick-breaking processes [19]. In particular, we
focus on two well-known subclasses of stick-breaking processes: the Dirichlet Process (DP) and the
Pitman-Yor process (PYP). Both are stochastic processes whose samples are discrete probability
distributions [7, 20]. A sample from a DP or PYP may be written as Σ_{i=1}^∞ π_i δ_{φ_i}, where π = {π_i}
denotes a countably infinite set of "weights" on a set of atoms {φ_i} drawn from some base probability
measure, and δ_{φ_i} denotes a delta function on the atom φ_i.¹ The prior distribution over π under
the DP and PYP is technically called the GEM distribution or the two-parameter Poisson-Dirichlet
distribution, but we will abuse terminology and refer to it more simply with the script notation DP or PY.
The DP weight distribution DP(α) may be described as a limit of finite Dirichlet distributions
in which the alphabet size grows and the concentration parameter shrinks, A → ∞ and α_0 → 0, such that
Aα_0 → α [20]. The PYP generalizes the DP to allow power-law tails, and includes the DP as a special
case [7].
Let PY(d, α) denote the PYP weight distribution with discount parameter d and concentration parameter α (also called the "Dirichlet parameter"), for d ∈ [0, 1), α > −d. When d = 0, this reduces
to the DP weight distribution, denoted DP(α). The name "stick-breaking" refers to the fact that
the weights of the DP and PYP can be sampled by transforming an infinite sequence of independent Beta random variables in a procedure known as "stick-breaking" [21]. Stick-breaking provides
samples π ~ PY(d, α) according to:

β_i ~ Beta(1 − d, α + i d),    π̃_i = β_i ∏_{k=1}^{i−1} (1 − β_k),   (6)
where π̃_i is known as the i-th size-biased sample from π. (The π̃_i sampled in this manner are not
strictly decreasing, but decrease on average such that Σ_{i=1}^∞ π̃_i = 1 with probability 1.) Asymptotically, the tails of a (sorted) sample from DP(α) decay exponentially, while for PY(d, α) with d ≠ 0,
the tails approximately follow a power-law: π_i ∝ i^{−1/d} ([7], pp. 867)². Many natural phenomena
such as city size, language, spike responses, etc., also exhibit power-law tails [10, 12]. (See Fig. 2.)
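Eq. 6 translates directly into a sampler for the leading weights (our sketch, not the authors' code; truncating at n sticks is an approximation we introduce for illustration):

import numpy as np

def sample_py_weights(d, alpha, n=1000, seed=None):
    """First n size-biased stick-breaking weights of PY(d, alpha), eq. (6)."""
    rng = np.random.default_rng(seed)
    i = np.arange(1, n + 1)
    beta = rng.beta(1.0 - d, alpha + i * d)   # beta_i ~ Beta(1-d, alpha + i*d)
    sticks = np.concatenate(([1.0], np.cumprod(1.0 - beta[:-1])))
    return beta * sticks                      # pi_i = beta_i * prod_{k<i}(1 - beta_k)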
3.1 Expectations over DP and PY weight distributions
A key virtue of PYP priors is a mathematical property called invariance under size-biased sampling,
which allows us to convert expectations over π on the infinite-dimensional simplex to one- or two-dimensional integrals with respect to the distribution of the first two size-biased samples [23, 24].
These expectations are required for computing the mean and variance of H under the prior (or
posterior) over π.
Proposition 1 (Expectations with first two size-biased samples). For π ~ PY(d, α) and arbitrary
integrable functionals f and g of π,

E_{(π|d,α)} [ Σ_{i=1}^∞ f(π_i) ] = E_{(π̃_1|d,α)} [ f(π̃_1) / π̃_1 ],   (7)

E_{(π|d,α)} [ Σ_{i, j≠i} g(π_i, π_j) ] = E_{(π̃_1,π̃_2|d,α)} [ g(π̃_1, π̃_2)(1 − π̃_1) / (π̃_1 π̃_2) ],   (8)

where π̃_1 and π̃_2 are the first two size-biased samples from π.
The first result (eq. 7) appears in [7], and we construct an analogous proof for eq. 8 (see Appendix).
The direct consequence of this lemma is that the first two moments of H(π) under the DP and PY
priors have closed forms, which can be obtained using (from eq. 6): π̃_1 ~ Beta(1 − d, α + d), and
π̃_2/(1 − π̃_1) | π̃_1 ~ Beta(1 − d, α + 2d), with f(π_i) = −π_i log(π_i) for E[H], and f(π_i) = π_i²(log π_i)²
and g(π_i, π_j) = π_i π_j (log π_i)(log π_j) for E[H²].
¹ Here, we will assume the base measure is non-atomic, so that the atoms φ_i are distinct with probability 1. This allows us to ignore the base measure, making the entropy of the distribution equal to the entropy of the weights π.
² Note that the power-law exponent is given incorrectly in [9, 22].
[Figure 3 panels flattened in extraction: prior mean (left) and prior standard deviation (right) of the entropy, in nats, as a function of α (log scale) for d = 0.0, 0.1, ..., 0.9.]
Figure 3: Prior mean and standard deviation over entropy H under a fixed PY prior, as a function of
α and d. Note that expected entropy is approximately linear in log α. Small prior standard deviations
(right) indicate that p(H(π)|d, α) is highly concentrated around the prior mean (left).
3.2 Posterior distribution over weights
A second desirable property of the PY distribution is that the posterior p(π_post | x, d, α) takes the
form of a (finite) Dirichlet mixture of point masses and a PY distribution [8]. This makes it possible
to apply the above results to the posterior mean and variance of H.
Let n_i denote the count of symbol i in an observed dataset. Then let α_i = n_i − d, N = Σ_i n_i,
and Σ_i α_i = Σ_i n_i − Kd = N − Kd, where K = Σ_{i=1}^A 1_{n_i > 0} is the number of unique
symbols observed. Given data, the posterior over (countably infinite) discrete distributions, written
as π_post = (p_1, p_2, p_3, ..., p_K, p_* π), has the distribution (given in [19]):

(p_1, p_2, p_3, ..., p_K, p_*) ~ Dir(n_1 − d, n_2 − d, ..., n_K − d, α + Kd),
π := (π_1, π_2, π_3, ...) ~ PY(d, α + Kd).   (9)

4 Bayesian entropy inference with PY priors
4.1 Fixed PY priors
Using the results of the previous section (eqs. 7 and 8), we can derive the prior mean and variance
of H under a PY(d, α) prior on π:

E[H(π) | d, α] = ψ_0(1 + α) − ψ_0(1 − d),   (10)

var[H(π) | d, α] = (α + d) / ((1 + α)² (1 − d)) + ((1 − d)/(1 + α)) ψ_1(2 − d) − ψ_1(2 + α),   (11)

where ψ_n is the polygamma function of order n (i.e., ψ_0 is the digamma function). Fig. 3 shows these
functions for a range of d and α values. They reveal the same phenomenon that [6] observed for
finite Dirichlet distributions: a PY prior with fixed (d, α) induces a narrow prior over H. In the
undersampled regime, Bayesian estimates under PY priors will therefore be strongly determined by
the choice of (d, α), and posterior credible intervals will be unrealistically narrow.³
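Eqs. 10-11 are immediate to evaluate numerically; a direct transcription (our sketch) using SciPy:

from scipy.special import psi, polygamma

def py_prior_entropy_moments(d, alpha):
    """Prior mean (10) and variance (11) of H(pi) under PY(d, alpha)."""
    mean = psi(1.0 + alpha) - psi(1.0 - d)
    var = ((alpha + d) / ((1.0 + alpha) ** 2 * (1.0 - d))
           + (1.0 - d) / (1.0 + alpha) * polygamma(1, 2.0 - d)
           - polygamma(1, 2.0 + alpha))
    return mean, var

As a sanity check, both moments vanish as (d, α) → (0, 0), where the DP concentrates all mass on a single atom and H = 0 almost surely.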
4.2 Pitman-Yor process mixture (PYM) prior
The narrow prior on H induced by fixed PY priors suggests a strategy for constructing a non-informative prior: mix together a family of PY distributions with some hyper-prior p(d, α) selected
to yield an approximately flat prior on H. Following the approach of [6], we set p(d, α) proportional to the derivative of the expected entropy. This leaves one extra degree of freedom, since large
³ The only exception is near the corner d → 1 and α → −d. There, one can obtain arbitrarily large prior variance over H for a given mean. However, such priors have very heavy tails and seem poorly suited to data with finite or exponential tails; we do not explore them further here.
Figure 4: Expected entropy under Pitman-Yor and Pitman-Yor mixture priors. (A) Left: expected entropy as a function of the natural parameters (d, α). Right: expected entropy as a function of the transformed parameters (h, γ). (B) Sampled prior distributions (N = 5e3) over entropy implied by three different PY mixtures: (1) p(γ, h) ∝ δ(γ − 1) (red), a mixture of PY(d, 0) distributions; (2) p(γ, h) ∝ δ(γ) (blue), a mixture of DP(α) distributions; and (3) a smooth density in γ (grey), which provides a tradeoff between (1) & (2). Note that the implied prior over H is approximately flat.
prior entropies can arise either from large values of α (as in the DP) or from values of d near 1 (see Fig. 4A). We can explicitly control this trade-off by reparametrizing the PY distribution, letting

h = ψ0(1 + α) − ψ0(1 − d),   γ = (ψ0(1) − ψ0(1 − d)) / (ψ0(1 + α) − ψ0(1 − d)),  (12)

where h > 0 is equal to the expected entropy of the prior (eq. 10) and γ > 0 captures prior beliefs about the tail behavior of π. For γ = 0, we have the DP (d = 0); for γ = 1 we have a PY(d, 0) process (i.e., α = 0). Where required, the inverse transformation to standard PY parameters is given by α = ψ0⁻¹(h(1 − γ) + ψ0(1)) − 1 and d = 1 − ψ0⁻¹(ψ0(1) − γh), where ψ0⁻¹(·) denotes the inverse digamma function.
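The transformation in eq. 12 and its inverse are straightforward to implement; the only nonstandard piece is the inverse digamma function, which can be computed by Newton's method. A sketch (helper names are ours; the initialization follows the usual digamma-inversion recipe):

    import numpy as np
    from scipy.special import polygamma

    def inv_digamma(y, iters=25):
        """Invert psi0 by Newton's method (standard initialization)."""
        y = np.asarray(y, dtype=float)
        x = np.where(y >= -2.22, np.exp(y) + 0.5, -1.0 / (y - polygamma(0, 1)))
        for _ in range(iters):
            x = x - (polygamma(0, x) - y) / polygamma(1, x)
        return x

    def py_to_h_gamma(d, alpha):
        h = polygamma(0, 1 + alpha) - polygamma(0, 1 - d)
        gamma = (polygamma(0, 1) - polygamma(0, 1 - d)) / h
        return h, gamma

    def h_gamma_to_py(h, gamma):
        alpha = inv_digamma(h * (1 - gamma) + polygamma(0, 1)) - 1
        d = 1 - inv_digamma(polygamma(0, 1) - gamma * h)
        return d, alpha

    h, g = py_to_h_gamma(0.3, 5.0)
    print(h_gamma_to_py(h, g))   # recovers (0.3, 5.0) up to solver tolerance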
We can construct an (approximately) flat improper prior over H on [0, ∞) by setting p(h, γ) = q(γ), where q is any density on [0, 1]. The induced prior on entropy is thus

p(H) = ∫∫ p(H | π) p_PY(π | γ, h) p(γ, h) dγ dh,  (13)

where p_PY(π | γ, h) denotes a PY distribution on π with parameters γ, h. Fig. 4B shows samples from this prior under three different choices of q(γ), for h uniform on [0, 3]. We refer to the resulting prior distribution over π as the Pitman-Yor mixture (PYM) prior. All results in the figures are generated using the prior q(γ) ∝ max(1 − γ, 0).
4.3 Posterior inference
Posterior inference under the PYM prior amounts to computing the two-dimensional integral over the hyperparameters (d, α),

Ĥ_PYM = E[H | x] = ∫ E[H | x, d, α] (p(x | d, α) p(α, d) / p(x)) d(d, α).  (14)

Although in practice we parametrize our prior using the variables γ and h, for clarity and consistency with other literature we present results in terms of d and α. Just as for the prior mean, the posterior mean E[H | x, d, α] is given by a convenient analytic form (derived in the Appendix),

E[H | α, d, x] = ψ0(α + N + 1) − ((α + Kd)/(α + N)) ψ0(1 − d) − (1/(α + N)) Σ_{i=1}^{K} (n_i − d) ψ0(n_i − d + 1).  (15)
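Given symbol counts, eq. 15 is essentially a one-line computation; a minimal sketch (our own naming), assuming the form of eq. 15 above:

    import numpy as np
    from scipy.special import polygamma

    def py_posterior_mean_entropy(counts, d, alpha):
        """E[H | x, d, alpha] (eq. 15) from nonzero symbol counts n_i."""
        n = np.asarray(counts, dtype=float)
        n = n[n > 0]
        N, K = n.sum(), len(n)
        return (polygamma(0, alpha + N + 1)
                - (alpha + K * d) / (alpha + N) * polygamma(0, 1 - d)
                - np.sum((n - d) * polygamma(0, n - d + 1)) / (alpha + N))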
The evidence, p(x | d, α), is given by

p(x | d, α) = ( ∏_{l=1}^{K−1} (α + l d) ) ( ∏_{i=1}^{K} Γ(n_i − d) ) Γ(1 + α) / ( Γ(1 − d)^K Γ(α + N) ).  (16)
We can obtain confidence regions for Ĥ_PYM by computing the posterior variance E[(H − Ĥ_PYM)² | x]. The estimate takes the same form as eq. 14, except that we substitute var[H | x, d, α] for E[H | x, d, α]. Although var[H | x, d, α] has an analytic closed form that is fast to compute, it is a lengthy expression that we do not have space to reproduce here; we provide it in the Appendix.
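Putting the pieces together, the PYM point estimate of eq. 14 can be approximated by quadrature over a grid of (d, α) values, weighting the conditional posterior means by the evidence of eq. 16. The following sketch reuses py_posterior_mean_entropy from above; the uniform grid weighting stands in for the paper's p(γ, h) hyper-prior, so this is an illustration of the computation rather than the exact estimator:

    import numpy as np
    from scipy.special import gammaln

    def log_evidence(n, d, alpha):            # eq. 16, evaluated in log space
        N, K = n.sum(), len(n)
        return (np.sum(np.log(alpha + d * np.arange(1, K)))
                + np.sum(gammaln(n - d)) - K * gammaln(1 - d)
                + gammaln(1 + alpha) - gammaln(alpha + N))

    def pym_estimate(counts, d_grid, alpha_grid):
        n = np.asarray(counts, dtype=float)
        n = n[n > 0]
        means, logw = [], []
        for d in d_grid:
            for alpha in alpha_grid:
                means.append(py_posterior_mean_entropy(n, d, alpha))
                logw.append(log_evidence(n, d, alpha))
        w = np.exp(np.array(logw) - max(logw))   # normalize in log space
        return float(np.dot(w, means) / w.sum())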
4.4 Computation
In practice, the two-dimensional integral over α and d is fast to compute numerically. Computation of the integrand can be carried out more efficiently using a representation in terms of multiplicities (also known as the empirical histogram distribution function [4]): the number of symbols that have occurred with each given frequency in the sample. Letting m_k = |{i : n_i = k}| denote the total number of symbols with exactly k observations in the sample gives the compressed statistic m = [m_0, m_1, ..., m_{nmax}]ᵀ, where nmax is the largest number of samples for any symbol. Note that the inner product [0, 1, ..., nmax] · m = N, the total number of samples.

The multiplicities representation significantly reduces the time and space complexity of our computations for most datasets, as we need only compute sums and products involving the number of symbols with distinct frequencies (at most nmax), rather than the total number of symbols K. In practice, we compute all such expressions using the multiplicities representation. For instance, in terms of the multiplicities, the evidence takes the compressed form
p(x | d, α) = p(m_1, ..., m_M | d, α) = ( Γ(1 + α) ∏_{l=1}^{K−1} (α + l d) / Γ(α + N) ) · N! ∏_{i=1}^{M} ( Γ(i − d) / (i! Γ(1 − d)) )^{m_i} (1 / m_i!).  (17)
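The multiplicities are easy to compute with a bincount, and eq. 17 is then evaluated in log space for stability; a sketch under our reading of eq. 17, not the authors' released implementation:

    import numpy as np
    from scipy.special import gammaln

    def multiplicities(counts):
        """m[k-1] = number of symbols seen exactly k times, k = 1..nmax."""
        c = np.asarray(counts)
        c = c[c > 0]
        return np.bincount(c)[1:]

    def log_evidence_multiplicities(counts, d, alpha):
        m = multiplicities(counts)
        ks = np.arange(1, len(m) + 1)        # occupancy levels 1..nmax
        N, K = int(ks @ m), int(m.sum())
        out = (gammaln(1 + alpha) - gammaln(alpha + N)
               + np.sum(np.log(alpha + d * np.arange(1, K))))
        out += np.sum(m * (gammaln(ks - d) - gammaln(ks + 1) - gammaln(1 - d)))
        out += gammaln(N + 1) - np.sum(gammaln(m + 1))
        return out

Only sums over the distinct occupancy levels appear, so the cost scales with nmax rather than with the number of observed symbols K.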
4.5 Existence of posterior mean
Given that the PYM prior with p(h) ∝ 1 on [0, ∞) is improper, the prior expectation E[H] does not exist. It is therefore reasonable to ask what conditions on the data are sufficient to obtain a finite posterior expectation E[H | x]. We give an answer to this question in the following short proposition, the proof of which we provide in Appendix B.

Theorem 1. Given a fixed dataset x of N samples and any bounded (potentially improper) prior p(γ, h), Ĥ_PYM < ∞ when N − K ≥ 2.

This result says that the BLS entropy estimate is finite whenever there are at least two 'coincidences', i.e., two fewer unique symbols than samples, even though the prior expectation is infinite.
5 Results
We compare PYM to other proposed entropy estimators using four example datasets in Fig. 5. The Miller-Maddow estimator is a well-known method for bias correction based on a first-order Taylor expansion of the entropy functional. The CAE ('Coverage Adjusted Estimator') addresses bias by combining the Horvitz-Thompson estimator with a nonparametric estimate of the proportion of total probability mass (the 'coverage') accounted for by the observed data x [17, 25]. When d = 0, PYM becomes a DP mixture (DPM). It may also be thought of as NSB with a very large A, and indeed the empirical performance of NSB with large A is nearly identical to that of DPM. All estimators appear to converge except Ĥ_nsb∞, the asymptotic extension of NSB discussed in Section 2.2, which increases unboundedly with data size. In addition, PYM performs competitively with other estimators. Note that unlike frequentist estimators, PYM error bars in Fig. 5 arise from direct computation of the posterior variance of the entropy.
6 Discussion
In this paper we introduced PYM, a novel entropy estimator for distributions with unknown support.
We derived analytic forms for the conditional mean and variance of entropy under a DP and PY
prior for fixed parameters. Inspired by the work of [6], we defined a novel PY mixture prior, PYM,
which implies an approximately flat prior on entropy. PYM addresses two major issues with NSB:
its dependence on knowledge of A and its inability (inherited from the Dirichlet distribution) to
account for the heavy-tailed distributions which abound in biological and other natural data. We have shown that PYM performs well in comparison to other entropy estimators, and indicated its practicality in example applications to data.

Figure 5: Convergence of entropy estimators with sample size, on two simulated and two real datasets. We write 'MiMa' for 'Miller-Maddow' and 'NSB∞' for Ĥ_nsb∞. Note that DPM ('DP mixture') is simply PYM with γ = 0. Credible intervals are indicated by two standard deviations of the posterior for DPM and PYM. (A) Exponential distribution πi ∝ e^{−i}. (B) Power-law distribution with exponent 2 (πi ∝ i^{−2}). (C) Word frequencies from the novel Moby Dick. (D) Neural words from 8 simultaneously-recorded retinal ganglion cells. Note that for clarity Ĥ_nsb∞ has been cropped from B and D. All plots are averages of 16 Monte Carlo runs. (Estimators shown: plugin, MiMa, DPM, PYM, CAE, NSB.)
We note, however, that despite its strong performance in simulation and in many practical examples,
we cannot assure that PYM will always be well-behaved. There may be specific distributions for
which the PYM estimate is so heavily biased that the credible intervals fail to bracket the true entropy. This reflects a general state of affairs for entropy estimation on countable distributions: any
convergence rate result must depend on restricting to a subclass of distributions [26]. Rather than
working within some analytically-defined subclass of discrete distributions (such as, for instance,
those with finite 'entropy variance' [17]), we work within the space of distributions parametrized
by PY which spans both the exponential and power-law tail distributions. Although PY parameterizes a large class of distributions, its structure allows us to use the PY parameters to understand the
qualitative features of the distributions made likely under a choice of prior. We feel this is a key
feature for small-sample inference, where the choice of prior is most relevant. Moreover, in a forthcoming paper, we demonstrate the consistency of PYM, and show that its small-sample flexibility
does not sacrifice desirable asymptotic properties.
In conclusion, we have defined the PYM prior through a reparametrization that assures an approximately flat prior on entropy. Moreover, although parametrized over the space of countably-infinite
discrete distributions, the computation of PYM depends primarily on the first two conditional moments of entropy under PY. We derive closed-form expressions for these moments that are fast to
compute, and allow the efficient computation of both the PYM estimate and its posterior credible
interval. As we demonstrate in application to data, PYM is competitive with previously proposed
estimators, and is especially well-suited to neural applications, where heavy-tailed distributions are
commonplace.
Acknowledgments
We thank E. J. Chichilnisky, A. M. Litke, A. Sher and J. Shlens for retinal data, and Y. W. Teh for helpful comments on the manuscript. This work was supported by a Sloan Research Fellowship, McKnight Scholar's Award, and NSF CAREER Award IIS-1150186 (JP).
References
[1] G. Miller. Note on the bias of information estimates. Information Theory in Psychology: Problems and Methods, 2:95-100, 1955.
[2] S. Panzeri and A. Treves. Analytical estimates of limited sampling biases in different information measures. Network: Computation in Neural Systems, 7:87-107, 1996.
[3] R. Strong, S. Koberle, R. de Ruyter van Steveninck, and W. Bialek. Entropy and information in neural spike trains. Physical Review Letters, 80:197-202, 1998.
[4] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15:1191-1253, 2003.
[5] P. Grassberger. Entropy estimates from insufficient samplings. arXiv preprint, January 2008, arXiv:0307138 [physics].
[6] I. Nemenman, F. Shafee, and W. Bialek. Entropy and inference, revisited. Adv. Neur. Inf. Proc. Sys., 14, 2002.
[7] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. The Annals of Probability, 25(2):855-900, 1997.
[8] H. Ishwaran and L. James. Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13(4):1211-1236, 2003.
[9] S. Goldwater, T. Griffiths, and M. Johnson. Interpolating between types and tokens by estimating power-law generators. Adv. Neur. Inf. Proc. Sys., 18:459, 2006.
[10] G. Zipf. Human Behavior and the Principle of Least Effort. Addison-Wesley Press, 1949.
[11] T. Dudok de Wit. When do finite sample effects significantly affect entropy estimates? Eur. Phys. J. B - Cond. Matter and Complex Sys., 11(3):513-516, October 1999.
[12] M. Newman. Power laws, Pareto distributions and Zipf's law. Contemporary Physics, 46(5):323-351, 2005.
[13] M. Hutter. Distribution of mutual information. Adv. Neur. Inf. Proc. Sys., 14:399, 2002.
[14] J. Hausser and K. Strimmer. Entropy inference and the James-Stein estimator, with application to nonlinear gene association networks. The Journal of Machine Learning Research, 10:1469-1484, 2009.
[15] I. Nemenman, W. Bialek, and R. van Steveninck. Entropy and information in neural spike trains: Progress on the sampling problem. Physical Review E, 69(5):056111, 2004.
[16] I. Nemenman. Coincidences and estimation of entropies of random variables with large cardinalities. Entropy, 13(12):2013-2023, 2011.
[17] V. Q. Vu, B. Yu, and R. E. Kass. Coverage-adjusted entropy estimation. Statistics in Medicine, 26(21):4039-4060, 2007.
[18] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454:995-999, 2008.
[19] H. Ishwaran and M. Zarepour. Exact and approximate sum representations for the Dirichlet process. Canadian Journal of Statistics, 30(2):269-283, 2002.
[20] J. Kingman. Random discrete distributions. Journal of the Royal Statistical Society, Series B (Methodological), 37(1):1-22, 1975.
[21] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161-173, March 2001.
[22] Y. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 985-992, 2006.
[23] M. Perman, J. Pitman, and M. Yor. Size-biased sampling of Poisson point processes and excursions. Probability Theory and Related Fields, 92(1):21-39, March 1992.
[24] J. Pitman. Random discrete distributions invariant under size-biased permutation. Advances in Applied Probability, pages 525-539, 1996.
[25] A. Chao and T. Shen. Nonparametric estimation of Shannon's index of diversity when there are unseen species in sample. Environmental and Ecological Statistics, 10(4):429-443, 2003.
[26] A. Antos and I. Kontoyiannis. Convergence properties of functional estimates for discrete distributions. Random Structures & Algorithms, 19(3-4):163-193, 2001.
[27] D. Wolpert and D. Wolf. Estimating functions of probability distributions from a finite set of samples. Physical Review E, 52(6):6841-6854, 1995.
Practical Bayesian Optimization of Machine
Learning Algorithms
Jasper Snoek
Department of Computer Science
University of Toronto
[email protected]
Hugo Larochelle
Department of Computer Science
University of Sherbrooke
[email protected]
Ryan P. Adams
School of Engineering and Applied Sciences
Harvard University
[email protected]
Abstract
The use of machine learning algorithms frequently involves careful tuning of learning parameters and model hyperparameters. Unfortunately, this tuning is often a 'black art' requiring expert experience, rules of thumb, or sometimes brute-force search. There is therefore great appeal for automatic approaches that can
optimize the performance of any given learning algorithm to the problem at hand.
In this work, we consider this problem through the framework of Bayesian optimization, in which a learning algorithm?s generalization performance is modeled
as a sample from a Gaussian process (GP). We show that certain choices for the
nature of the GP, such as the type of kernel and the treatment of its hyperparameters, can play a crucial role in obtaining a good optimizer that can achieve expertlevel performance. We describe new algorithms that take into account the variable
cost (duration) of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed
algorithms improve on previous automatic procedures and can reach or surpass
human expert-level optimization for many algorithms including latent Dirichlet
allocation, structured SVMs and convolutional neural networks.
1 Introduction
Machine learning algorithms are rarely parameter-free: parameters controlling the rate of learning
or the capacity of the underlying model must often be specified. These parameters are often considered nuisances, making it appealing to develop machine learning algorithms with fewer of them.
Another, more flexible take on this issue is to view the optimization of such parameters as a procedure to be automated. Specifically, we could view such tuning as the optimization of an unknown
black-box function and invoke algorithms developed for such problems. A good choice is Bayesian
optimization [1], which has been shown to outperform other state of the art global optimization
algorithms on a number of challenging optimization benchmark functions [2]. For continuous functions, Bayesian optimization typically works by assuming the unknown function was sampled from
a Gaussian process and maintains a posterior distribution for this function as observations are made
or, in our case, as the results of running learning algorithm experiments with different hyperparameters are observed. To pick the hyperparameters of the next experiment, one can optimize the
expected improvement (EI) [1] over the current best result or the Gaussian process upper confidence
bound (UCB)[3]. EI and UCB have been shown to be efficient in the number of function evaluations
required to find the global optimum of many multimodal black-box functions [4, 3].
Machine learning algorithms, however, have certain characteristics that distinguish them from other
black-box optimization problems. First, each function evaluation can require a variable amount of
time: training a small neural network with 10 hidden units will take less time than a bigger network with 1000 hidden units. Even without considering duration, the advent of cloud computing
makes it possible to quantify economically the cost of requiring large-memory machines for learning, changing the actual cost in dollars of an experiment with a different number of hidden units.
Second, machine learning experiments are often run in parallel, on multiple cores or machines. In
both situations, the standard sequential approach of GP optimization can be suboptimal.
In this work, we identify good practices for Bayesian optimization of machine learning algorithms.
We argue that a fully Bayesian treatment of the underlying GP kernel is preferred to the approach
based on optimization of the GP hyperparameters, as previously proposed [5]. Our second contribution is the description of new algorithms for taking into account the variable and unknown cost of
experiments or the availability of multiple cores to run experiments in parallel.
Gaussian processes have proven to be useful surrogate models for computer experiments and good
practices have been established in this context for sensitivity analysis, calibration and prediction [6].
While these strategies are not considered in the context of optimization, they can be useful to researchers in machine learning who wish to understand better the sensitivity of their models to various
hyperparameters. Hutter et al. [7] have developed sequential model-based optimization strategies for
the configuration of satisfiability and mixed integer programming solvers using random forests. The
machine learning algorithms we consider, however, warrant a fully Bayesian treatment as their expensive nature necessitates minimizing the number of evaluations. Bayesian optimization strategies
have also been used to tune the parameters of Markov chain Monte Carlo algorithms [8]. Recently,
Bergstra et al. [5] have explored various strategies for optimizing the hyperparameters of machine
learning algorithms. They demonstrated that grid search strategies are inferior to random search [9],
and suggested the use of Gaussian process Bayesian optimization, optimizing the hyperparameters
of a squared-exponential covariance, and proposed the Tree Parzen Algorithm.
2 Bayesian Optimization with Gaussian Process Priors
As in other kinds of optimization, in Bayesian optimization we are interested in finding the minimum of a function f(x) on some bounded set X, which we will take to be a subset of R^D. What
makes Bayesian optimization different from other procedures is that it constructs a probabilistic
model for f (x) and then exploits this model to make decisions about where in X to next evaluate
the function, while integrating out uncertainty. The essential philosophy is to use all of the information available from previous evaluations of f (x) and not simply rely on local gradient and Hessian
approximations. This results in a procedure that can find the minimum of difficult non-convex functions with relatively few evaluations, at the cost of performing more computation to determine the
next point to try. When evaluations of f (x) are expensive to perform ? as is the case when it
requires training a machine learning algorithm ? then it is easy to justify some extra computation
to make better decisions. For an overview of the Bayesian optimization formalism and a review of
previous work, see, e.g., Brochu et al. [10]. In this section we briefly review the general Bayesian
optimization approach, before discussing our novel contributions in Section 3.
There are two major choices that must be made when performing Bayesian optimization. First, one
must select a prior over functions that will express assumptions about the function being optimized.
For this we choose the Gaussian process prior, due to its flexibility and tractability. Second, we
must choose an acquisition function, which is used to construct a utility function from the model
posterior, allowing us to determine the next point to evaluate.
2.1 Gaussian Processes
The Gaussian process (GP) is a convenient and powerful prior distribution on functions, which we
will take here to be of the form f : X → R. The GP is defined by the property that any finite set of N points {x_n ∈ X}_{n=1}^{N} induces a multivariate Gaussian distribution on R^N. The n-th of these points
is taken to be the function value f (xn ), and the elegant marginalization properties of the Gaussian
distribution allow us to compute marginals and conditionals in closed form. The support and properties of the resulting distribution on functions are determined by a mean function m : X ? R and
a positive definite covariance function K : X ? X ? R. We will discuss the impact of covariance
functions in Section 3.1. For an overview of Gaussian processes, see Rasmussen and Williams [11].
2.2 Acquisition Functions for Bayesian Optimization
We assume that the function f(x) is drawn from a Gaussian process prior and that our observations are of the form {x_n, y_n}_{n=1}^{N}, where y_n ∼ N(f(x_n), ν) and ν is the variance of noise introduced into the function observations. This prior and these data induce a posterior over functions; the acquisition function, which we denote by a : X → R⁺, determines what point in X should be evaluated next via a proxy optimization x_next = argmax_x a(x), where several different functions have been proposed. In general, these acquisition functions depend on the previous observations, as well as the GP hyperparameters; we denote this dependence as a(x; {x_n, y_n}, θ). There are several popular choices of acquisition function. Under the Gaussian process prior, these functions depend on the model solely through its predictive mean function μ(x; {x_n, y_n}, θ) and predictive variance function σ²(x; {x_n, y_n}, θ). In what follows, we will denote the best current value as x_best = argmin_{x_n} f(x_n) and the cumulative distribution function of the standard normal as Φ(·).
Probability of Improvement. One intuitive strategy is to maximize the probability of improving over the best current value [12]. Under the GP this can be computed analytically as

a_PI(x; {x_n, y_n}, θ) = Φ(γ(x)),   γ(x) = (f(x_best) − μ(x; {x_n, y_n}, θ)) / σ(x; {x_n, y_n}, θ).  (1)
Expected Improvement. Alternatively, one could choose to maximize the expected improvement (EI) over the current best. This also has closed form under the Gaussian process:

a_EI(x; {x_n, y_n}, θ) = σ(x; {x_n, y_n}, θ) (γ(x) Φ(γ(x)) + N(γ(x); 0, 1)).  (2)
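EI is simple to evaluate from the GP's predictive moments; a minimal Python sketch of eq. 2 (our own function names, not the paper's released code):

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        """EI for minimization at points with predictive moments mu, sigma."""
        sigma = np.maximum(sigma, 1e-12)   # guard against zero variance
        gamma = (f_best - mu) / sigma
        return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))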
GP Upper Confidence Bound. A more recent development is the idea of exploiting lower confidence bounds (upper, when considering maximization) to construct acquisition functions that minimize regret over the course of their optimization [3]. These acquisition functions have the form

a_LCB(x; {x_n, y_n}, θ) = μ(x; {x_n, y_n}, θ) − κ σ(x; {x_n, y_n}, θ),  (3)

with a tunable κ to balance exploitation against exploration.
In this work we will focus on the EI criterion, as it has been shown to be better-behaved than
probability of improvement, but unlike the method of GP upper confidence bounds (GP-UCB), it
does not require its own tuning parameter. Although the EI algorithm performs well in minimization
problems, we wish to note that the regret formalization may be more appropriate in some settings.
We perform a direct comparison between our EI-based approach and GP-UCB in Section 4.1.
3 Practical Considerations for Bayesian Optimization of Hyperparameters
Although Bayesian optimization is an elegant framework for optimizing expensive functions, several limitations have prevented it from becoming a widely-used technique for optimizing hyperparameters in
machine learning problems. First, it is unclear for practical problems what an appropriate choice is
for the covariance function and its associated hyperparameters. Second, as the function evaluation
itself may involve a time-consuming optimization procedure, problems may vary significantly in
duration and this should be taken into account. Third, optimization algorithms should take advantage
of multi-core parallelism in order to map well onto modern computational environments. In this
section, we propose solutions to each of these issues.
3.1 Covariance Functions and Treatment of Covariance Hyperparameters
The power of the Gaussian process to express a rich distribution on functions rests solely on the
shoulders of the covariance function. While non-degenerate covariance functions correspond to
infinite bases, they nevertheless can correspond to strong assumptions regarding likely functions. In
particular, the automatic relevance determination (ARD) squared exponential kernel
K_SE(x, x') = θ0 exp(−(1/2) r²(x, x')),   r²(x, x') = Σ_{d=1}^{D} (x_d − x'_d)² / θ_d²,  (4)
is often a default choice for Gaussian process regression. However, sample functions with this covariance function are unrealistically smooth for practical optimization problems. We instead propose
the use of the ARD Matérn 5/2 kernel:

K_M52(x, x') = θ0 (1 + √(5 r²(x, x')) + (5/3) r²(x, x')) exp(−√(5 r²(x, x'))).  (5)
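A direct implementation of the ARD Matérn 5/2 kernel of eq. 5 (a sketch with our own variable names; theta0 is the amplitude and there is one length scale per input dimension):

    import numpy as np

    def matern52_ard(X, X2, theta0, lengthscales):
        """Kernel matrix between rows of X (n x D) and X2 (m x D), eq. 5."""
        diff = (X[:, None, :] - X2[None, :, :]) / lengthscales
        r2 = np.sum(diff ** 2, axis=-1)
        s = np.sqrt(5.0 * r2)
        return theta0 * (1.0 + s + (5.0 / 3.0) * r2) * np.exp(-s)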
Figure 1: Illustration of integrated expected improvement. (a) Posterior samples under varying hyperparameters: three posterior samples are shown, each with different length scales, after the same five observations. (b) Expected improvement under varying hyperparameters: three expected improvement acquisition functions, with the same data and hyperparameters; the maximum of each is shown. (c) Integrated expected improvement: the integrated expected improvement, with its maximum shown.

Figure 2: Illustration of the acquisition with pending evaluations. (a) Posterior samples after three data: three data have been observed and three posterior functions are shown, with 'fantasies' for three pending evaluations. (b) Expected improvement under three fantasies: expected improvement conditioned on each joint fantasy of the pending outcomes. (c) Expected improvement across fantasies: expected improvement after integrating over the fantasy outcomes.
This covariance function results in sample functions which are twice-differentiable, an assumption
that corresponds to those made by, e.g., quasi-Newton methods, but without requiring the smoothness of the squared exponential.
After choosing the form of the covariance, we must also manage the hyperparameters that govern its behavior (note that these 'hyperparameters' are distinct from those being subjected to the overall Bayesian optimization), as well as that of the mean function. For our problems of interest, typically we would have D + 3 Gaussian process hyperparameters: D length scales θ_{1:D}, the covariance amplitude θ0, the observation noise ν, and a constant mean m. The most commonly advocated approach is to use a point estimate of these parameters by optimizing the marginal likelihood under the Gaussian process, p(y | {x_n}_{n=1}^{N}, θ, ν, m) = N(y | m·1, Σ_θ + ν I), where y = [y_1, y_2, ..., y_N]ᵀ and Σ_θ is the covariance matrix resulting from the N input points under the hyperparameters θ.
However, for a fully-Bayesian treatment of hyperparameters (summarized here by θ alone), it is desirable to marginalize over hyperparameters and compute the integrated acquisition function:

â(x; {x_n, y_n}) = ∫ a(x; {x_n, y_n}, θ) p(θ | {x_n, y_n}_{n=1}^{N}) dθ,  (6)
where a(x) depends on θ and all of the observations. For probability of improvement and EI, this
expectation is the correct generalization to account for uncertainty in hyperparameters. We can
therefore blend acquisition functions arising from samples from the posterior over GP hyperparameters and have a Monte Carlo estimate of the integrated expected improvement. These samples can
be acquired efficiently using slice sampling, as described in Murray and Adams [13]. As both optimization and Markov chain Monte Carlo are computationally dominated by the cubic cost of solving
an N -dimensional linear system (and our function evaluations are assumed to be much more expensive anyway), the fully-Bayesian treatment is sensible and our empirical evaluations bear this out.
Figure 1 shows how the integrated expected improvement changes the acquisition function.
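In code, the integrated acquisition is just an average of per-sample EI surfaces. A sketch assuming a hypothetical predict(theta, x) helper that returns the GP posterior mean and standard deviation under hyperparameters theta, and reusing expected_improvement from the earlier sketch:

    import numpy as np

    def integrated_ei(x, theta_samples, predict, f_best):
        """Monte Carlo estimate of eq. 6 over GP hyperparameter samples."""
        acq = [expected_improvement(*predict(theta, x), f_best)
               for theta in theta_samples]
        return np.mean(acq, axis=0)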
3.2 Modeling Costs
Ultimately, the objective of Bayesian optimization is to find a good setting of our hyperparameters
as quickly as possible. Greedy acquisition procedures such as expected improvement try to make
the best progress possible in the next function evaluation. From a practical point of view, however,
we are not so concerned with function evaluations as with wallclock time. Different regions of
the parameter space may result in vastly different execution times, due to varying regularization,
learning rates, etc. To improve our performance in terms of wallclock time, we propose optimizing
with the expected improvement per second, which prefers to acquire points that are not only likely
to be good, but that are also likely to be evaluated quickly. This notion of cost can be naturally
generalized to other budgeted resources, such as reagents or money.
Just as we do not know the true objective function f(x), we also do not know the duration function c(x) : X → R⁺. We can nevertheless employ our Gaussian process machinery to model ln c(x)
alongside f (x). In this work, we assume that these functions are independent of each other, although
their coupling may be usefully captured using GP variants of multi-task learning (e.g., [14, 15]).
Under the independence assumption, we can easily compute the predicted expected inverse duration
and use it to compute the expected improvement per second as a function of x.
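Concretely, if ln c(x) has Gaussian predictive moments (mu_c, sigma_c²) under the cost GP, then E[1/c(x)] = exp(−mu_c + sigma_c²/2) by log-normality, and the acquisition is scaled accordingly. A sketch with assumed obj_predict / cost_predict helpers, reusing expected_improvement from above:

    import numpy as np

    def ei_per_second(x, obj_predict, cost_predict, f_best):
        mu, sigma = obj_predict(x)        # objective GP posterior at x
        mu_c, sigma_c = cost_predict(x)   # GP posterior on ln c(x)
        inv_duration = np.exp(-mu_c + 0.5 * sigma_c ** 2)
        return expected_improvement(mu, sigma, f_best) * inv_duration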
3.3 Monte Carlo Acquisition for Parallelizing Bayesian Optimization
With the advent of multi-core computing, it is natural to ask how we can parallelize our Bayesian
optimization procedures. More generally than simply batch parallelism, however, we would like to
be able to decide what x should be evaluated next, even while a set of points are being evaluated.
Clearly, we cannot use the same acquisition function again, or we will repeat one of the pending
experiments. Ideally, we could perform a roll-out of our acquisition policy, to choose a point that
appropriately balanced information gain and exploitation. However, such roll-outs are generally
intractable. Instead we propose a sequential strategy that takes advantage of the tractable inference
properties of the Gaussian process to compute Monte Carlo estimates of the acquisition function
under different possible results from pending function evaluations.
Consider the situation in which N evaluations have completed, yielding data {x_n, y_n}_{n=1}^{N}, and in which J evaluations are pending at locations {x_j}_{j=1}^{J}. Ideally, we would choose a new point based on the expected acquisition function under all possible outcomes of these pending evaluations:

â(x; {x_n, y_n}, θ, {x_j}) = ∫_{R^J} a(x; {x_n, y_n}, θ, {x_j, y_j}) p({y_j}_{j=1}^{J} | {x_j}_{j=1}^{J}, {x_n, y_n}_{n=1}^{N}) dy_1 ⋯ dy_J.  (7)
This is simply the expectation of a(x) under a J-dimensional Gaussian distribution, whose mean and
covariance can easily be computed. As in the covariance hyperparameter case, it is straightforward
to use samples from this distribution to compute the expected acquisition and use this to select the
next point. Figure 2 shows how this procedure would operate with queued evaluations. We note that
a similar approach is touched upon briefly by Ginsbourger and Riche [16], but they view it as too
intractable to warrant attention. We have found our Monte Carlo estimation procedure to be highly
effective in practice, however, as will be discussed in Section 4.
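A sketch of the fantasy-based Monte Carlo acquisition of eq. 7, assuming a hypothetical GP object exposing joint prediction and conditioning helpers (these names are ours, not an existing API), and reusing expected_improvement from above:

    import numpy as np

    def mc_parallel_ei(x, gp, X_pending, n_fantasies, rng):
        """Average EI over sampled outcomes of the pending evaluations."""
        mu_p, cov_p = gp.predict_joint(X_pending)      # assumed helper
        acq = []
        for _ in range(n_fantasies):
            y_f = rng.multivariate_normal(mu_p, cov_p)
            gp_f = gp.conditioned_on(X_pending, y_f)   # assumed helper
            mu, sigma = gp_f.predict(x)
            acq.append(expected_improvement(mu, sigma, gp_f.best_observed()))
        return np.mean(acq, axis=0)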
4 Empirical Analyses
In this section, we empirically analyse¹ the algorithms introduced in this paper and compare to existing strategies and human performance on a number of challenging machine learning problems. We refer to our method of expected improvement while marginalizing GP hyperparameters as 'GP EI MCMC', optimizing hyperparameters as 'GP EI Opt', EI per second as 'GP EI per Second', and N times parallelized GP EI MCMC as 'N x GP EI MCMC'. Each results figure plots the progression of min_n f(x_n) over the number of function evaluations or time, averaged over multiple runs of each algorithm. If not specified otherwise, x_next = argmax_x a(x) is computed using gradient-based search with multiple restarts (see supplementary material for details). The code used is made publicly available at http://www.cs.toronto.edu/~jasper/software.html.
4.1 Branin-Hoo and Logistic Regression
We first compare to standard approaches and the recent Tree Parzen Algorithm² (TPA) of Bergstra et al. [5] on two standard problems. The Branin-Hoo function is a common benchmark for Bayesian

¹ All experiments were conducted on identical machines using the Amazon EC2 service.
² Using the publicly available code from https://github.com/jaberg/hyperopt/wiki
Figure 3: Comparisons on the Branin-Hoo function (3a) and training logistic regression on MNIST (3b). (3c) shows GP EI MCMC and GP EI per Second from (3b), but in terms of time elapsed. (Vertical axes: minimum function value; methods shown include GP EI Opt, GP EI MCMC, GP EI per Second, GP-UCB, and TPA.)
Figure 4: Different strategies of optimization on the Online LDA problem compared in terms of function evaluations (4a), walltime (4b) and constrained to a grid or not (4c). (Vertical axes: minimum function value; methods shown include GP EI MCMC, GP EI per second, GP EI Opt, random grid search, and 3x/5x/10x GP EI MCMC.)
optimization techniques [2] that is defined over x ∈ R², where 0 ≤ x1 ≤ 15 and −5 ≤ x2 ≤ 15. We
also compare to TPA on a logistic regression classification task on the popular MNIST data. The
algorithm requires choosing four hyperparameters, the learning rate for stochastic gradient descent,
on a log scale from 0 to 1, the ℓ2 regularization parameter, between 0 and 1, the minibatch size, from
from 20 to 2000 and the number of learning epochs, from 5 to 2000. Each algorithm was run on the
Branin-Hoo and logistic regression problems 100 and 10 times respectively and mean and standard
error are reported. The results of these analyses are presented in Figures 3a and 3b in terms of
the number of times the function is evaluated. On Branin-Hoo, integrating over hyperparameters is
superior to using a point estimate and the GP EI significantly outperforms TPA, finding the minimum
in less than half as many evaluations, in both cases. For logistic regression, 3b and 3c show that
although EI per second is less efficient in function evaluations it outperforms standard EI in time.
4.2 Online LDA
Latent Dirichlet Allocation (LDA) is a directed graphical model for documents in which words
are generated from a mixture of multinomial ?topic? distributions. Variational Bayes is a popular
paradigm for learning and, recently, Hoffman et al. [17] proposed an online learning approach in
that context. Online LDA requires two learning parameters, τ0 and κ, that control the learning rate ρ_t = (τ0 + t)^{−κ} used to update the variational parameters of LDA based on the t-th minibatch of
document word count vectors. The size of the minibatch is also a third parameter that must be
chosen. Hoffman et al. [17] relied on an exhaustive grid search of size 6 × 6 × 8, for a total of 288
hyperparameter configurations.
We used the code made publicly available by Hoffman et al. [17] to run experiments with online
LDA on a collection of Wikipedia articles. We downloaded a random set of 249 560 articles, split
into training, validation and test sets of size 200 000, 24 560 and 25 000 respectively. The documents
are represented as vectors of word counts from a vocabulary of 7702 words. As reported in Hoffman
et al. [17], we used a lower bound on the per word perplexity of the validation set documents as the
performance measure. One must also specify the number of topics and the hyperparameters α for the symmetric Dirichlet prior over the topic distributions and β for the symmetric Dirichlet prior over the per-document topic mixing weights. We followed Hoffman et al. [17] and used 100 topics and α = β = 0.01 in our experiments in order to emulate their analysis and repeated exactly the grid
search reported in the paper3 . Each online LDA evaluation generally took between five to ten hours
to converge, thus the grid search requires approximately 60 to 120 processor days to complete.
³ I.e., the only difference was the randomly sampled collection of articles in the data set and the choice of the vocabulary. We ran each evaluation for 10 hours or until convergence.
Figure 5: A comparison of various strategies for optimizing the hyperparameters of M3E models on the protein motif finding task in terms of walltime (5a), function evaluations (5b) and different covariance functions (5c). (Covariance functions compared in (5c): Matérn 5/2 ARD, squared exponential, squared exponential ARD, and Matérn 3/2 ARD.)
In Figures 4a and 4b we compare our various strategies of optimization over the same grid on this
expensive problem. That is, the algorithms were restricted to only the exact parameter settings as
evaluated by the grid search. Each optimization was then repeated 100 times (each time picking two
different random experiments to initialize the optimization with) and the mean and standard error
are reported⁴. Figure 4c also presents a 5-run average of optimization with 3 and 5 times parallelized
GP EI MCMC, but without restricting the new parameter setting to be on the pre-specified grid (see
supplementary material for details). A comparison with their 'on grid' versions is illustrated.
Clearly integrating over hyperparameters is superior to using a point estimate in this case. While
GP EI MCMC is the most efficient in terms of function evaluations, we see that parallelized GP EI
MCMC finds the best parameters in significantly less time. Finally, in Figure 4c we see that the
parallelized GP EI MCMC algorithms find a significantly better minimum value than was found in
the grid search used by Hoffman et al. [17] while running a fraction of the number of experiments.
4.3
Motif Finding with Structured Support Vector Machines
In this example, we consider optimizing the learning parameters of Max-Margin Min-Entropy
(M3E) Models [18], which include Latent Structured Support Vector Machines [19] as a special
case. Latent structured SVMs outperform SVMs on problems where they can explicitly model
problem-dependent hidden variables. A popular example task is the binary classification of protein DNA sequences [18, 20, 19]. The hidden variable to be modeled is the unknown location of
particular subsequences, or motifs, that are indicators of positive sequences.
Setting the hyperparameters, such as the regularisation term, C, of structured SVMs remains a challenge and these are typically set through a time consuming grid search procedure as is done in
[18, 19]. Indeed, Kumar et al. [20] avoided hyperparameter selection for this task as it was too
computationally expensive. However, Miller et al. [18] demonstrate that results depend highly on
the setting of the parameters, which differ for each protein. M3E models introduce an entropy term,
parameterized by α, which enables the model to outperform latent structured SVMs. This additional
performance, however, comes at the expense of an additional problem-dependent hyperparameter.
We emulate the experiments of Miller et al. [18] for one protein with approximately 40 000 sequences. We explore 25 settings of the parameter C, on a log scale from 10⁻¹ to 10⁶, 14 settings of α, on a log scale from 0.1 to 5, and the model convergence tolerance, ε ∈ {10⁻⁴, 10⁻³, 10⁻², 10⁻¹}.
We ran a grid search over the 1400 possible combinations of these parameters, evaluating each over
5 random 50-50 training and test splits.
In Figures 5a and 5b, we compare the randomized grid search to GP EI MCMC, GP EI per Second
and their 3x parallelized versions, all constrained to the same points on the grid. Each algorithm
was repeated 100 times and the mean and standard error are shown. We observe that the Bayesian
optimization strategies are considerably more efficient than grid search which is the status quo. In
this case, GP EI MCMC is superior to GP EI per Second in terms of function evaluations but GP
EI per Second finds better parameters faster than GP EI MCMC as it learns to use a less strict
convergence tolerance early on while exploring the other parameters. Indeed, 3x GP EI per second,
is the least efficient in terms of function evaluations but finds better parameters faster than all the
other algorithms. Figure 5c compares the use of various covariance functions in GP EI MCMC
optimization on this problem, again repeating the optimization 100 times. It is clear that the selection
⁴ The restriction of the search to the same grid was chosen for efficiency reasons: it allowed us to repeat the experiments several times efficiently, by first computing all function evaluations over the whole grid and reusing these values within each repeated experiment.
Figure 6: Validation error on the CIFAR-10 data for different optimization strategies. (Methods shown: GP EI MCMC, GP EI Opt, GP EI per Second, GP EI MCMC 3x Parallel, and a human expert; axes show minimum function value against function evaluations and time in hours.)
of an appropriate covariance significantly affects performance and the estimation of length scale
parameters is critical. The assumption of infinite differentiability imposed by the commonly
used squared exponential is too restrictive for this problem.
4.4 Convolutional Networks on CIFAR-10
Neural networks and deep learning methods notoriously require careful tuning of numerous hyperparameters. Multi-layer convolutional neural networks are an example of such a model for which a
thorough exploration of architectures and hyperparameters is beneficial, as demonstrated in Saxe et al. [21], but often computationally prohibitive. While Saxe et al. [21] demonstrate a methodology for efficiently exploring model architectures, numerous hyperparameters, such as regularisation parameters, remain. In this empirical analysis, we tune nine hyperparameters of a three-layer convolutional network [22] on the CIFAR-10 benchmark dataset using the code provided⁵. This model
has been carefully tuned by a human expert [22] to achieve a highly competitive result of 18% test
error on the unaugmented data, which matches the published state of the art result [23] on CIFAR-10. The parameters we explore include the number of epochs to run the model, the learning rate,
four weight costs (one for each layer and the softmax output weights), and the width, scale and
power of the response normalization on the pooling layers of the network.
We optimize over the nine parameters for each strategy on a withheld validation set and report the
mean validation error and standard error over five separate randomly initialized runs. Results are
presented in Figure 6 and contrasted with the average results achieved using the best parameters
found by the expert. The best hyperparameters found by the GP EI MCMC approach achieve an
error on the test set of 14.98%, which is over 3% better than the expert and the state of the art on
CIFAR-10. The same procedure was repeated on the CIFAR-10 data augmented with horizontal
reflections and translations, similarly improving on the expert from 11% to 9.5% test error. To our
knowledge this is the lowest error reported, compared to the 11% state of the art and a recently
published 11.21% [24] using similar methods, on the competitive CIFAR-10 benchmark.
5 Conclusion
We presented methods for performing Bayesian optimization for hyperparameter selection of general machine learning algorithms. We introduced a fully Bayesian treatment for EI, and algorithms
for dealing with variable time regimes and running experiments in parallel. The effectiveness of our
approaches were demonstrated on three challenging recently published problems spanning different
areas of machine learning. The resulting Bayesian optimization finds better hyperparameters significantly faster than the approaches used by the authors and surpasses a human expert at selecting
hyperparameters on the competitive CIFAR-10 dataset, beating the state of the art by over 3%.
Acknowledgements
The authors thank Alex Krizhevsky, Hoffman et al. [17] and Miller et al. [18] for making their code
and data available, and George Dahl for valuable feedback. This work was funded by DARPA Young
Faculty Award N66001-12-1-4219, NSERC and an Amazon AWS in Research grant.
References
[1] Jonas Mockus, Vytautas Tiesis, and Antanas Zilinskas. The application of Bayesian methods
for seeking the extremum. Towards Global Optimization, 2:117?129, 1978.
5
Available at: http://code.google.com/p/cuda-convnet/
8
[2] D.R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal
of Global Optimization, 21(4):345?383, 2001.
[3] Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process
optimization in the bandit setting: No regret and experimental design. In Proceedings of the
27th International Conference on Machine Learning, 2010.
[4] Adam D. Bull. Convergence rates of efficient global optimization algorithms. Journal of
Machine Learning Research, (3-4):2879?2904, 2011.
[5] James S. Bergstra, R?emi Bardenet, Yoshua Bengio, and B?al?azs K?egl. Algorithms for hyperparameter optimization. In Advances in Neural Information Processing Systems 25. 2011.
[6] Marc C. Kennedy and Anthony O?Hagan. Bayesian calibration of computer models. Journal
of the Royal Statistical Society: Series B (Statistical Methodology), 63(3), 2001.
[7] Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Sequential model-based optimization
for general algorithm configuration. In Learning and Intelligent Optimization 5, 2011.
[8] Nimalan Mahendran, Ziyu Wang, Firas Hamze, and Nando de Freitas. Adaptive mcmc with
bayesian optimization. In AISTATS, 2012.
[9] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal
of Machine Learning Research, 13:281?305, 2012.
[10] Eric Brochu, Vlad M. Cora, and Nando de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement
learning. pre-print, 2010. arXiv:1012.2599.
[11] Carl E. Rasmussen and Christopher Williams. Gaussian Processes for Machine Learning. MIT
Press, 2006.
[12] H. J. Kushner. A new method for locating the maximum point of an arbitrary multipeak curve
in the presence of noise. Journal of Basic Engineering, 86, 1964.
[13] Iain Murray and Ryan P. Adams. Slice sampling covariance hyperparameters of latent Gaussian
models. In Advances in Neural Information Processing Systems 24, pages 1723?1731. 2010.
[14] Yee Whye Teh, Matthias Seeger, and Michael I. Jordan. Semiparametric latent factor models.
In AISTATS, 2005.
[15] Edwin V. Bonilla, Kian Ming A. Chai, and Christopher K. I. Williams. Multi-task Gaussian
process prediction. In Advances in Neural Information Processing Systems 22, 2008.
[16] David Ginsbourger and Rodolphe Le Riche. Dealing with asynchronicity in parallel Gaussian process based global optimization. http://hal.archives-ouvertes.fr/
hal-00507632, 2010.
[17] Matthew Hoffman, David M. Blei, and Francis Bach. Online learning for latent Dirichlet
allocation. In Advances in Neural Information Processing Systems 24, 2010.
[18] Kevin Miller, M. Pawan Kumar, Benjamin Packer, Danny Goodman, and Daphne Koller. Max-margin min-entropy models. In AISTATS, 2012.
[19] Chun-Nam John Yu and Thorsten Joachims. Learning structural SVMs with latent variables.
In Proceedings of the 26th International Conference on Machine Learning, 2009.
[20] M. Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable
models. In Advances in Neural Information Processing Systems 25. 2010.
[21] Andrew Saxe, Pang Wei Koh, Zhenghao Chen, Maneesh Bhand, Bipin Suresh, and Andrew Ng.
On random weights and unsupervised feature learning. In Proceedings of the 28th International
Conference on Machine Learning, 2011.
[22] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report,
Department of Computer Science, University of Toronto, 2009.
[23] Adam Coates and Andrew Y. Ng. Selecting receptive fields in deep networks. In Advances in
Neural Information Processing Systems 25. 2011.
[24] Dan Claudiu Ciresan, Ueli Meier, and Jürgen Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition, 2012.
A quasi-Newton proximal splitting method
S. Becker∗
M.J. Fadili†
Abstract
A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the
piece-wise linear nature of the dual problem. The second part of the paper applies
the previous result to acceleration of convex minimization problems, and leads
to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications
including signal processing, sparse recovery and machine learning and classification.
1 Introduction
Convex optimization has proved to be extremely useful to all quantitative disciplines of science. A
common trend in modern science is the increase in size of datasets, which drives the need for more
efficient optimization schemes. For large-scale unconstrained smooth convex problems, two classes
of methods have seen the most success: limited memory quasi-Newton methods and non-linear
conjugate gradient (CG) methods. Both of these methods generally outperform simpler methods,
such as gradient descent.
For problems with non-smooth terms and/or constraints, it is possible to generalize gradient descent
with proximal gradient descent (which includes projected gradient descent as a sub-case), which is
just the application of the forward-backward algorithm [1].
Unlike gradient descent, it is not easy to adapt quasi-Newton and CG methods to problems involving constraints and non-smooth terms. Much work has been written on the topic, and approaches
generally follow an active-set methodology. In the limit, as the active-set is correctly identified, the
methods behave similarly to their unconstrained counterparts. These methods have seen success, but
are not as efficient or as elegant as the unconstrained versions. In particular, a sub-problem on the
active-set must be solved, and the accuracy of this sub-iteration must be tuned with heuristics in
order to obtain competitive results.
1.1 Problem statement
Let $\mathcal{H} = (\mathbb{R}^N, \langle \cdot, \cdot \rangle)$ be equipped with the usual Euclidean scalar product $\langle x, y \rangle = \sum_{i=1}^{N} x_i y_i$ and associated norm $\|x\| = \sqrt{\langle x, x \rangle}$. For a matrix $V \in \mathbb{R}^{N \times N}$ in the symmetric positive-definite (SDP) cone $\mathcal{S}_{++}(N)$, we define $\mathcal{H}_V = (\mathbb{R}^N, \langle \cdot, \cdot \rangle_V)$ with the scalar product $\langle x, y \rangle_V = \langle x, V y \rangle$ and norm $\|x\|_V$ corresponding to the metric induced by $V$. The dual space of $\mathcal{H}_V$, under $\langle \cdot, \cdot \rangle$, is $\mathcal{H}_{V^{-1}}$. We denote $\mathrm{I}_{\mathcal{H}}$ the identity operator on $\mathcal{H}$.
A real-valued function $f : \mathcal{H} \to \mathbb{R} \cup \{+\infty\}$ is $0$-coercive if $\lim_{\|x\| \to +\infty} f(x) = +\infty$. The domain of $f$ is defined by $\operatorname{dom} f = \{x \in \mathcal{H} : f(x) < +\infty\}$ and $f$ is proper if $\operatorname{dom} f \neq \emptyset$. We say that a real-valued function $f$ is lower semi-continuous (lsc) if $\liminf_{x \to x_0} f(x) \geq f(x_0)$. The class of all proper lsc convex functions from $\mathcal{H}$ to $\mathbb{R} \cup \{+\infty\}$ is denoted by $\Gamma_0(\mathcal{H})$. The conjugate or Legendre-Fenchel transform of $f$ on $\mathcal{H}$ is denoted $f^*$.
∗ LJLL, CNRS-UPMC, Paris France ([email protected]).
† GREYC, CNRS-ENSICAEN-Univ. of Caen, Caen France ([email protected]).
Our goal is the generic minimization of functions of the form
$$\min_{x \in \mathcal{H}} \ \{ F(x) \triangleq f(x) + h(x) \} , \qquad (\mathrm{P})$$
where $f, h \in \Gamma_0(\mathcal{H})$. We also assume the set of minimizers is nonempty (e.g. $F$ is coercive) and that a standard domain qualification holds. We take $f \in C^1(\mathbb{R}^N)$ with $L$-Lipschitz continuous gradient, and we assume $h$ is separable. Write $x^\star$ to denote an element of $\operatorname{Argmin} F(x)$.
The class we consider covers non-smooth convex optimization problems, including those with convex constraints. Here are some examples in regression, machine learning and classification.
Example 1 (LASSO).
$$\min_{x \in \mathcal{H}} \ \frac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1 . \qquad (1)$$
Example 2 (Non-negative least-squares (NNLS)).
$$\min_{x \in \mathcal{H}} \ \frac{1}{2}\|Ax - b\|_2^2 \quad \text{subject to } x \geq 0 . \qquad (2)$$
Example 3 (Sparse Support Vector Machines). One would like to find a linear decision function which minimizes the objective
$$\min_{x \in \mathcal{H}} \ \frac{1}{m} \sum_{i=1}^{m} L(\langle x, z_i \rangle + b, y_i) + \lambda \|x\|_1 \qquad (3)$$
where for $i = 1, \dots, m$, $(z_i, y_i) \in \mathbb{R}^N \times \{\pm 1\}$ is the training set, and $L$ is a smooth loss function with Lipschitz-continuous gradient such as the squared hinge loss $L(\hat{y}_i, y_i) = \max(0, 1 - \hat{y}_i y_i)^2$ or the logistic loss $L(\hat{y}_i, y_i) = \log(1 + e^{-\hat{y}_i y_i})$.
1.2 Contributions
This paper introduces a class of scaled norms for which we can compute a proximity operator; these
results themselves are significant, for previous results only cover diagonal scaling (the diagonal
scaling result is trivial). Then, motivated by the discrepancy between constrained and unconstrained
performance, we define a class of limited-memory quasi-Newton methods to solve (P) and that
extends naturally and elegantly from the unconstrained to the constrained case. Most well-known
quasi-Newton methods for constrained problems, such as L-BFGS-B [2], are only applicable to box
constraints $l \leq x \leq u$. The power of our approach is that it applies to a wide variety of useful non-smooth functionals (see §3.1.4 for a list) and that it does not rely on an active-set strategy. The
approach uses the zero-memory SR1 algorithm, and we provide evidence that the non-diagonal term
provides significant improvements over diagonal Hessians.
2 Quasi-Newton forward-backward splitting
2.1 The algorithm
In the following, define the quadratic approximation
$$Q_k^B(x) = f(x_k) + \langle \nabla f(x_k), x - x_k \rangle + \frac{1}{2}\|x - x_k\|_B^2 , \qquad (4)$$
where $B \in \mathcal{S}_{++}(N)$.
The standard (non-relaxed) version of the forward-backward splitting algorithm (also known as proximal or projected gradient descent) to solve (P) updates to a new iterate $x_{k+1}$ according to
$$x_{k+1} = \operatorname*{argmin}_{x} \ Q_k^{B_k}(x) + h(x) = \operatorname{prox}_{t_k h}(x_k - t_k \nabla f(x_k)) \qquad (5)$$
with $B_k = t_k^{-1} \mathrm{I}_{\mathcal{H}}$, $t_k \in \left]0, 2/L\right[$ (typically $t_k = 1/L$ unless a line search is used).
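For concreteness, here is a minimal sketch of update (5) applied to the LASSO instance (1), with $f(x) = \frac{1}{2}\|Ax-b\|^2$ and $h = \lambda \|\cdot\|_1$; the fixed step $t_k = 1/L$ with $L = \|A\|^2$ and the iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # prox of t*||.||_1 in the standard Euclidean norm
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(A, b, lam, n_iter=500):
    # Forward-backward splitting (5) with B_k = L*I_H, i.e. t_k = 1/L
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                   # forward (gradient) step
        x = soft_threshold(x - grad / L, lam / L)  # backward (prox) step
    return x
```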
Note that update (5) specializes to gradient descent when $h = 0$. Therefore, if $f$ is a strictly convex quadratic function and one takes $B_k = \nabla^2 f(x_k)$, then we obtain the Newton method. Let's get back to $h \neq 0$. It is now well known that a fixed $B = L\,\mathrm{I}_{\mathcal{H}}$ is usually a poor choice. Since $f$ is smooth and can be approximated by a quadratic, and inspired by quasi-Newton methods, this suggests picking $B_k$ as an approximation of the Hessian. Here we propose a diagonal+rank-1 approximation.
Our diagonal+rank-1 quasi-Newton forward-backward splitting algorithm is listed in Algorithm 1 (with details for the quasi-Newton update in Algorithm 2; see §4 for details). These algorithms are listed as simply as possible to emphasize their important components; the actual software used for numerical tests is open-source and available at http://www.greyc.ensicaen.fr/~jfadili/software.html.
Algorithm 1: Zero-memory Symmetric Rank 1 (0SR1) algorithm to solve $\min f + h$
Require: $x_0 \in \operatorname{dom}(f + h)$, Lipschitz constant estimate $L$ of $\nabla f$, stopping criterion $\epsilon$
1: for $k = 1, 2, 3, \dots$ do
2:   $s_k \leftarrow x_k - x_{k-1}$
3:   $y_k \leftarrow \nabla f(x_k) - \nabla f(x_{k-1})$
4:   Compute $H_k$ via Algorithm 2, and define $B_k = H_k^{-1}$.
5:   Compute the rank-1 proximity operator (see §3)
       $\hat{x}_{k+1} \leftarrow \operatorname{prox}_h^{B_k}(x_k - H_k \nabla f(x_k))$   (6)
6:   $p_k \leftarrow \hat{x}_{k+1} - x_k$ and terminate if $\|p_k\| < \epsilon$
7:   Line-search along the ray $x_k + t\,p_k$ to determine $x_{k+1}$, or choose $t = 1$.
8: end for
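To make the structure of Algorithm 1 concrete, a sketch of one iteration is given below. Both `zero_memory_sr1` and `prox_h_Bk` are hypothetical helpers of the kind developed in §4 and §3 respectively: the former implements Algorithm 2, the latter computes the rank-1 proximity operator (6) in the $B_k = H_k^{-1}$ norm obtained via the Sherman-Morrison formula.

```python
def osr1_step(grad_f, prox_h_Bk, x_prev, g_prev, x):
    # One pass of steps 2-7 of Algorithm 1, with the trivial choice t = 1.
    g = grad_f(x)
    s, y = x - x_prev, g - g_prev
    d0, u = zero_memory_sr1(s, y)      # Algorithm 2: H_k = d0*I + u u^T
    Hg = d0 * g + u * (u @ g)          # apply H_k to the gradient
    x_new = prox_h_Bk(x - Hg, d0, u)   # rank-1 prox (6) in the B_k-norm
    return x_new, g
```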
2.2 Relation to prior work
First-order methods. The algorithm in (5) is variously known as proximal descent or the iterated shrinkage/thresholding algorithm (IST or ISTA). It has a grounded convergence theory, and also admits over-relaxation factors in $(0, 1)$ [3].
The spectral projected gradient (SPG) [4] method was designed as an extension of the Barzilai-Borwein spectral step-length method to constrained problems. In [5], it was extended to non-smooth problems by allowing general proximity operators; the Barzilai-Borwein method [6] uses a specific choice of step length $t_k$ motivated by quasi-Newton methods. Numerical evidence suggests the SPG/SpaRSA method is highly effective, although convergence results are not as strong as for ISTA.
FISTA [7] is a multi-step accelerated version of ISTA inspired by the work of Nesterov. The stepsize $t$ is chosen in a similar way to ISTA; in our implementation, we tweak the original approach by using a Barzilai-Borwein step size, a standard line search, and restart [8], since this led to improved performance. Nesterov acceleration can be viewed as an over-relaxed version of ISTA with a specific, non-constant over-relaxation parameter.
The above approaches assume $B_k$ is a constant diagonal. The general diagonal case was considered in several papers in the 1980s as a simple quasi-Newton method, but never widely adopted. More recent attempts include a static choice $B_k \equiv B$ for a primal-dual method [9]. A convergence rate analysis of forward-backward splitting with static and variable $B_k$, where one of the operators is maximal strongly monotone, is given in [10].
Active set approaches. Active set methods take a simple step, such as gradient projection, to identify active variables, and then use a more advanced quadratic model to solve for the free variables. A
well-known such method is L-BFGS-B [2, 11] which handles general box-constrained problems; we
test an updated version [12]. A recent bound-constrained solver is ASA [13] which uses a conjugate
gradient (CG) solver on the free variables, and shows good results compared to L-BFGS-B, SPG,
GENCAN and TRON. We also compare to several active set approaches specialized for $\ell_1$ penalties: "Orthant-wise Learning" (OWL) [14], "Projected Scaled Sub-gradient + Active Set" (PSSas) [15], "Fixed-point continuation + Active Set" (FPC AS) [16], and "CG + IST" (CGIST) [17].
Other approaches. By transforming the problem into a standard conic programming problem, the generic problem is amenable to interior-point methods (IPM). IPM requires solving a Newton-step equation, so first-order-like "Hessian-free" variants of IPM solve the Newton step approximately, either by approximately solving the equation or by subsampling the Hessian. The main issues are
speed and robust stopping criteria for the approximations.
Yet another approach is to include the non-smooth h term in the quadratic approximation. Yu et
al. [18] propose a non-smooth modification of BFGS and L-BFGS, and test on problems where h is
typically a hinge-loss or related function.
The projected quasi-Newton (PQN) algorithm [19, 20] is perhaps the most elegant and logical extension of quasi-Newton methods, but it involves solving a sub-iteration. PQN proposes the SPG [4]
algorithm for the subproblems, and finds that this is an efficient tradeoff whenever the cost function (which is not involved in the sub-iteration) is relatively much more expensive to evaluate than
projecting onto the constraints. Again, the cost of the sub-problem solver (and a suitable stopping criterion for this inner solve) are issues. As discussed in [21], it is possible to generalize PQN to general non-smooth problems whenever the proximity operator is known (since, as mentioned above, it
is possible to extend SPG to this case).
3 Proximity operators and proximal calculus
For space limitation reasons, we only recall essential definitions. More notions and results from convex analysis, as well as proofs, can be found in the supplementary material.
Definition 4 (Proximity operator [22]). Let $h \in \Gamma_0(\mathcal{H})$. Then, for every $x \in \mathcal{H}$, the function $z \mapsto \frac{1}{2}\|x - z\|^2 + h(z)$ achieves its infimum at a unique point denoted by $\operatorname{prox}_h x$. The uniquely-valued operator $\operatorname{prox}_h : \mathcal{H} \to \mathcal{H}$ thus defined is the proximity operator or proximal mapping of $h$.
3.1 Proximal calculus in $\mathcal{H}_V$
Throughout, we denote $\operatorname{prox}_h^V = (\mathrm{I}_{\mathcal{H}_V} + V^{-1} \partial h)^{-1}$, where $\partial h$ is the subdifferential of $h$, the proximity operator of $h$ w.r.t. the norm endowing $\mathcal{H}_V$ for some $V \in \mathcal{S}_{++}(N)$. Note that since $V \in \mathcal{S}_{++}(N)$, the proximity operator $\operatorname{prox}_h^V$ is well-defined.
Lemma 5 (Moreau identity in $\mathcal{H}_V$). Let $h \in \Gamma_0(\mathcal{H})$; then for any $x \in \mathcal{H}$
$$\operatorname{prox}_{\rho h^*}^{V}(x) + \rho\, V^{-1} \circ \operatorname{prox}_{h/\rho}^{V^{-1}} \circ V(x/\rho) = x , \quad \forall\, 0 < \rho < +\infty . \qquad (7)$$
Corollary 6.
$$\operatorname{prox}_h^V(x) = x - V^{-1} \circ \operatorname{prox}_{h^*}^{V^{-1}} \circ V(x) . \qquad (8)$$
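With $V = \mathrm{I}_{\mathcal{H}}$, (8) reduces to the classical Moreau decomposition $\operatorname{prox}_h(x) = x - \operatorname{prox}_{h^*}(x)$. A quick numerical sanity check of this special case, as a sketch, for $h = \lambda\|\cdot\|_1$, whose conjugate is the indicator of the $\ell_\infty$-ball of radius $\lambda$ (the test vector and $\lambda$ are arbitrary choices):

```python
import numpy as np

lam = 0.7
x = np.random.randn(5)
prox_h = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # prox of lam*||.||_1
prox_h_conj = np.clip(x, -lam, lam)  # projection onto the lam-radius l_inf ball
assert np.allclose(prox_h, x - prox_h_conj)  # identity (8) with V = I
```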
3.1.1 Diagonal+rank-1: General case
Theorem 7 (Proximity operator in $\mathcal{H}_V$). Let $h \in \Gamma_0(\mathcal{H})$ and $V = D + uu^T$, where $D$ is diagonal with (strictly) positive diagonal elements $d_i$, and $u \in \mathbb{R}^N$. Then,
$$\operatorname{prox}_h^V(x) = D^{-1/2} \circ \operatorname{prox}_{h \circ D^{-1/2}}(D^{1/2} x - v) , \qquad (9)$$
where $v = \alpha D^{-1/2} u$ and $\alpha$ is the unique root of
$$p(\alpha) = \left\langle u,\, x - D^{-1/2} \circ \operatorname{prox}_{h \circ D^{-1/2}} \circ D^{1/2}(x - \alpha D^{-1} u) \right\rangle + \alpha , \qquad (10)$$
which is a Lipschitz continuous and strictly increasing function on $\mathbb{R}$ with Lipschitz constant $1 + \sum_i u_i^2 / d_i$.
Remark 8.
• Computing $\operatorname{prox}_h^V$ amounts to solving a scalar optimization problem that involves the computation of $\operatorname{prox}_{h \circ D^{-1/2}}$. The latter can be much simpler to compute as $D$ is diagonal (beyond the obvious separable case that we will consider shortly). This is typically the case when $h$ is the indicator of the $\ell_1$-ball or the canonical simplex. The corresponding projector can be obtained in expected complexity $O(N \log N)$ by simply sorting the absolute values.
• It is of course straightforward to compute $\operatorname{prox}_{h^*}^V$ from $\operatorname{prox}_h^V$ either using Theorem 7, or using this theorem together with Corollary 6 and the Sherman-Morrison inversion lemma.
3.1.2 Diagonal+rank-1: Separable case
The following corollary is key to our novel optimization algorithm.
Corollary 9. Assume that $h \in \Gamma_0(\mathcal{H})$ is separable, i.e. $h(x) = \sum_{i=1}^{N} h_i(x_i)$, and $V = D + uu^T$, where $D$ is diagonal with (strictly) positive diagonal elements $d_i$, and $u \in \mathbb{R}^N$. Then,
$$\operatorname{prox}_h^V(x) = \left( \operatorname{prox}_{h_i/d_i}(x_i - v_i/d_i) \right)_i , \qquad (11)$$
where $v = \alpha u$ and $\alpha$ is the unique root of
$$p(\alpha) = \left\langle u,\, x - \left( \operatorname{prox}_{h_i/d_i}(x_i - \alpha u_i/d_i) \right)_i \right\rangle + \alpha , \qquad (12)$$
which is a Lipschitz continuous and strictly increasing function on $\mathbb{R}$.
Proof: As $h$ is separable and $D \in \mathcal{S}_{++}(N)$ is diagonal, applying Theorem 7 yields the desired result.
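As a sketch of how Corollary 9 is used in practice, the following computes $\operatorname{prox}_h^V$ for $h = \|\cdot\|_1$ and $V = \operatorname{diag}(d) + uu^T$ by finding the root of $p(\alpha)$ in (12) with scipy's brentq; the initial bracket and its doubling are illustrative choices (any sign-changing bracket works since $p$ is strictly increasing).

```python
import numpy as np
from scipy.optimize import brentq

def prox_l1(z, t):
    # elementwise prox of t_i*|.|, i.e. soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_rank1_l1(x, d, u):
    # prox of ||.||_1 in the norm induced by V = diag(d) + u u^T (Corollary 9)
    def p(alpha):
        return u @ (x - prox_l1(x - alpha * u / d, 1.0 / d)) + alpha
    lo, hi = -1.0, 1.0
    while p(lo) > 0:
        lo *= 2
    while p(hi) < 0:
        hi *= 2
    alpha = brentq(p, lo, hi)            # unique root of (12)
    return prox_l1(x - alpha * u / d, 1.0 / d)
```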
Proposition 10. Assume that for $1 \leq i \leq N$, $\operatorname{prox}_{h_i}$ is piecewise affine on $\mathbb{R}$ with $k_i \geq 1$ segments, i.e.
$$\operatorname{prox}_{h_i}(x_i) = a_j x_i + b_j , \quad t_j \leq x_i \leq t_{j+1} , \quad j \in \{1, \dots, k_i\} .$$
Let $k = \sum_{i=1}^{N} k_i$. Then $\operatorname{prox}_h^V(x)$ can be obtained exactly by sorting at most the $k$ real values
$$\left( \frac{d_i}{u_i}(x_i - t_j) \right)_{(i,j) \in \{1,\dots,N\} \times \{1,\dots,k_i\}} .$$
Proof: Recall that (10) has a unique solution. When $\operatorname{prox}_{h_i}$ is piecewise affine with $k_i$ segments, it is easy to see that $p(\alpha)$ in (12) is also piecewise affine, with slopes and intercepts changing at the $k$ transition points $\left( \frac{d_i}{u_i}(x_i - t_j) \right)_{(i,j) \in \{1,\dots,N\} \times \{1,\dots,k_i\}}$. To get $\alpha^\star$, it is sufficient to isolate the unique segment that intersects the abscissa axis. This can be achieved by sorting the values of the transition points, which can be done in average complexity $O(k \log k)$.
Remark 11.
• The above computational cost can be reduced in many situations by exploiting e.g. symmetry of the $h_i$'s, identical functions, etc. This turns out to be the case for many functions of interest, e.g. the $\ell_1$-norm, the indicator of the $\ell_\infty$-ball or the positive orthant, and many others; see the examples hereafter.
• Corollary 9 can be extended to the "block" separable case (i.e. separable in subsets of coordinates) when $D$ is piecewise constant along the same block indices.
3.1.3 Semi-smooth Newton method
In many situations (see the examples below), the root of $p(\alpha)$ can be found exactly in polynomial complexity. If no closed form is available, one can appeal to an efficient iterative method to solve (10) (or (12)). As $p$ is Lipschitz-continuous, hence so-called Newton (slantly) differentiable, semi-smooth Newton methods are good such solvers, with the proviso that one can design a simple slanting function which can be algorithmically exploited.
The semi-smooth Newton method for the solution of (10) can be stated as the iteration
$$\alpha_{t+1} = \alpha_t - g(\alpha_t)^{-1} p(\alpha_t) , \qquad (13)$$
where $g$ is a generalized derivative of $p$.
Proposition 12 (Generalized derivative of $p$). If $\operatorname{prox}_{h \circ D^{-1/2}}$ is Newton differentiable with generalized derivative $G$, then so is the mapping $p$, with a generalized derivative
$$g(\alpha) = 1 + \left\langle u,\, D^{-1/2} \circ G(D^{1/2} x - \alpha D^{-1/2} u) \circ D^{-1/2} u \right\rangle .$$
Furthermore, $g$ is nonsingular with a uniformly bounded inverse on $\mathbb{R}$.
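A sketch of iteration (13) in code; $p$ and its generalized derivative $g$ (as in Proposition 12) are supplied by the caller, and the tolerance and iteration cap are illustrative assumptions.

```python
def semismooth_newton(p, g, alpha0=0.0, tol=1e-10, max_iter=50):
    # Iteration (13): alpha_{t+1} = alpha_t - g(alpha_t)^{-1} p(alpha_t)
    alpha = alpha0
    for _ in range(max_iter):
        step = p(alpha) / g(alpha)   # g is nonsingular by Proposition 12
        alpha -= step
        if abs(step) < tol:
            break
    return alpha
```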
Function h               Algorithm
$\ell_1$-norm            Separable: exact in $O(N \log N)$
Hinge                    Separable: exact in $O(N \log N)$
$\ell_\infty$-ball       Separable: exact in $O(N \log N)$, from the $\ell_1$-norm by the Moreau identity
Box constraint           Separable: exact in $O(N \log N)$
Positivity constraint    Separable: exact in $O(N \log N)$
$\ell_1$-ball            Nonseparable: semismooth Newton; $\operatorname{prox}_{h \circ D^{-1/2}}$ costs $O(N \log N)$
$\ell_\infty$-norm       Nonseparable: from the projector on the $\ell_1$-ball by the Moreau identity
Canonical simplex        Nonseparable: semismooth Newton; $\operatorname{prox}_{h \circ D^{-1/2}}$ costs $O(N \log N)$
max function             Nonseparable: from the projector on the simplex by the Moreau identity
Table 1: Summary of functions which have efficiently computable rank-1 proximity operators
Proof: This follows from linearity and the chain rule [23, Lemma 3.5]. The second statement follows from the strict increasing monotonicity of $p$ as established in Theorem 7.
Thus, as $p$ is Newton differentiable with a nonsingular generalized derivative whose inverse is also bounded, the general semi-smooth Newton convergence theorem implies that (13) converges superlinearly to the unique root of (10).
3.1.4 Examples
Many functions can be handled very efficiently using our results above. For instance, Table 1 summarizes a few of them for which we can obtain either an exact answer by sorting when possible, or else by minimizing w.r.t. a scalar variable (i.e. finding the unique root of (10)).
4 A primal rank 1 SR1 algorithm
Following conventional quasi-Newton notation, we let $B$ denote an approximation to the Hessian of $f$ and $H$ denote an approximation to the inverse Hessian. All quasi-Newton methods update an approximation to the (inverse) Hessian that satisfies the secant condition:
$$H_k y_k = s_k , \quad y_k = \nabla f(x_k) - \nabla f(x_{k-1}) , \quad s_k = x_k - x_{k-1} \qquad (14)$$
Algorithm 1 follows the SR1 method [24], which uses a rank-1 update to the inverse Hessian approximation at every step. The SR1 method is perhaps less well-known than BFGS, but it has the
crucial property that updates are rank-1, rather than rank-2, and it has been described thus: "[SR1] has now taken its place alongside the BFGS method as the pre-eminent updating formula." [25].
We propose two important modifications to SR1. The first is to use limited-memory, as is commonly
done with BFGS. In particular, we use zero-memory, which means that at every iteration, a new
diagonal plus rank-one matrix is formed. The other modification is to extend the SR1 method to
the general setting of minimizing f + h where f is smooth but h need not be smooth; this further
generalizes the case when h is an indicator function of a convex set. Every step of the algorithm
replaces f with a quadratic approximation, and keeps h unchanged. Because h is left unchanged,
the subgradient of h is used in an implicit manner, in comparison to methods such as [18] that use
an approximation to h as well and therefore take an explicit subgradient step.
Choosing $H_0$. In our experience, the choice of $H_0$ is best if scaled with a Barzilai-Borwein spectral step length
$$\tau_{BB2} = \langle s_k, y_k \rangle / \langle y_k, y_k \rangle \qquad (15)$$
(we call it $\tau_{BB2}$ to distinguish it from the other Barzilai-Borwein step size $\tau_{BB1} = \langle s_k, s_k \rangle / \langle s_k, y_k \rangle \geq \tau_{BB2}$).
In SR1 methods, the quantity $\langle s_k - H_0 y_k, y_k \rangle$ must be positive in order to have a well-defined update for $u_k$. The update is:
$$H_k = H_0 + u_k u_k^T , \quad u_k = (s_k - H_0 y_k) / \sqrt{\langle s_k - H_0 y_k, y_k \rangle} . \qquad (16)$$
Algorithm 2: Sub-routine to compute the approximate inverse Hessian $H_k$
Require: $k$, $s_k$, $y_k$, $0 < \gamma < 1$, $0 < \tau_{\min} < \tau_{\max}$
1: if $k = 1$ then
2:   $H_0 \leftarrow \tau \mathrm{I}_{\mathcal{H}}$ where $\tau > 0$ is arbitrary
3:   $u_k \leftarrow 0$
4: else
5:   $\tau_{BB2} \leftarrow \langle s_k, y_k \rangle / \|y_k\|^2$   {Barzilai-Borwein step length}
6:   Project $\tau_{BB2}$ onto $[\tau_{\min}, \tau_{\max}]$
7:   $H_0 \leftarrow \gamma \tau_{BB2} \mathrm{I}_{\mathcal{H}}$
8:   if $\langle s_k - H_0 y_k, y_k \rangle \leq 10^{-8} \|y_k\|_2 \|s_k - H_0 y_k\|_2$ then
9:     $u_k \leftarrow 0$   {Skip the quasi-Newton update}
10:  else
11:    $u_k \leftarrow (s_k - H_0 y_k) / \sqrt{\langle s_k - H_0 y_k, y_k \rangle}$
12:  end if
13: end if
14: return $H_k = H_0 + u_k u_k^T$   {$B_k = H_k^{-1}$ can be computed via the Sherman-Morrison formula}
For this reason, we choose $H_0 = \gamma \tau_{BB2} \mathrm{I}_{\mathcal{H}}$ with $0 < \gamma < 1$, and thus $0 \leq \langle s_k - H_0 y_k, y_k \rangle = (1 - \gamma) \langle s_k, y_k \rangle$. If $\langle s_k, y_k \rangle = 0$, then there is no symmetric rank-one update that satisfies the secant condition. The inequality $\langle s_k, y_k \rangle > 0$ is the curvature condition, and it is guaranteed for all strictly convex objectives. Following the recommendation in [26], we skip updates whenever $\langle s_k, y_k \rangle$ cannot be guaranteed to be non-zero given standard floating-point precision.
A value of $\gamma = 0.8$ works well in most situations. We have tested picking $\gamma$ adaptively, as well as trying $H_0$ to be non-constant on the diagonal, but found no consistent improvements.
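A sketch of Algorithm 2 in code form, returning the pair $(d_0, u)$ that represents $H_k = d_0 I + uu^T$; the $k = 1$ initialization branch is omitted, the safeguard threshold $10^{-8}$ and $\gamma = 0.8$ follow the text, and the $\tau$ bounds are illustrative.

```python
import numpy as np

def zero_memory_sr1(s, y, gamma=0.8, tau_min=1e-8, tau_max=1e8):
    # Returns (d0, u) such that H_k = d0 * I + u u^T (Algorithm 2).
    tau_bb2 = (s @ y) / (y @ y)                  # BB step length (15)
    tau_bb2 = np.clip(tau_bb2, tau_min, tau_max)
    d0 = gamma * tau_bb2                         # H_0 = gamma * tau_BB2 * I
    r = s - d0 * y
    if r @ y <= 1e-8 * np.linalg.norm(y) * np.linalg.norm(r):
        u = np.zeros_like(s)                     # skip the quasi-Newton update
    else:
        u = r / np.sqrt(r @ y)                   # SR1 update (16)
    return d0, u
```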
5 Numerical experiments and comparisons
[Figure 1: two log-scale plots of objective value error against time in seconds, comparing 0-mem SR1, FISTA w/ BB, SPG/SpaRSA, L-BFGS-B, ASA, PSSas, OWL, CGIST, and FPC-AS.]
Figure 1: (a) is the first LASSO test, (b) is the second LASSO test
Consider the unconstrained LASSO problem (1). Many codes, such as [27] and L-BFGS-B [2],
handle only non-negativity or box-constraints. Using the standard change of variables by introducing
the positive and negative parts of x, the LASSO can be recast as
$$\min_{x_+, x_- \geq 0} \ \frac{1}{2}\|Ax_+ - Ax_- - b\|^2 + \lambda \mathbf{1}^T (x_+ + x_-)$$
and then $x$ is recovered via $x = x_+ - x_-$. With such a formulation, solvers such as L-BFGS-B are applicable. However, this constrained problem has twice the number of variables, and the Hessian of the quadratic part changes from $A^T A$ to
$$\tilde{A} = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix} ,$$
which necessarily has (at least) $n$ degenerate $0$ eigenvalues and adversely affects solvers.
A similar situation occurs with the hinge-loss function. Consider the shifted and reversed hinge loss function $h(x) = \max(0, x)$. Then one can split $x = x_+ - x_-$, add constraints $x_+ \geq 0$, $x_- \geq 0$, and replace $h(x)$ with $\mathbf{1}^T x_+$. As before, the Hessian gains $n$ degenerate eigenvalues.
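A sketch of the splitting just described, handing the recast LASSO to a box-constrained solver (scipy's L-BFGS-B interface is used here purely for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def lasso_via_splitting(A, b, lam):
    # Solve min 0.5*||A(x+ - x-) - b||^2 + lam*1^T(x+ + x-)  s.t.  x+, x- >= 0
    m, n = A.shape
    def fun(z):
        r = A @ (z[:n] - z[n:]) - b
        g = A.T @ r
        return 0.5 * r @ r + lam * z.sum(), np.concatenate([g, -g]) + lam
    res = minimize(fun, np.zeros(2 * n), jac=True, method='L-BFGS-B',
                   bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]       # recover x = x+ - x-
```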
We compared our proposed algorithm on the LASSO problem. The first example, in Fig. 1a, is a typical example from compressed sensing that takes $A \in \mathbb{R}^{m \times n}$ to have i.i.d. $\mathcal{N}(0, 1)$ entries with $m = 1500$ and $n = 3000$. We set $\lambda = 0.1$. L-BFGS-B does very well, followed closely by our proposed SR1 algorithm and PSSas. Note that L-BFGS-B and ASA are in Fortran and C, respectively (the other algorithms are in Matlab).
Our second example uses a square operator $A$ with dimensions $n = 13^3 = 2197$ chosen as a 3D discrete differential operator. This example stems from a numerical analysis problem to solve a discretized PDE as suggested by [28]. For this example, we set $\lambda = 1$. For all the solvers, we use the same parameters as in the previous example. Unlike the previous example, Fig. 1b now shows that L-BFGS-B is very slow on this problem. The FPC-AS method, very slow on the earlier test, is now the fastest. However, just as before, our SR1 method is nearly as good as the best algorithm. This robustness is one benefit of our approach, since the method does not rely on active-set identifying parameters and inner iteration tolerances.
6 Conclusions
In this paper, we proposed a novel variable metric (quasi-Newton) forward-backward splitting algorithm, designed to efficiently solve non-smooth convex problems structured as the sum of a smooth
term and a non-smooth one. We introduced a class of weighted norms induced by diagonal+rank-1 symmetric positive definite matrices, and proposed a whole framework to compute a proximity
operator in the weighted norm. The latter result is distinctly new and is of independent interest.
We also provided clear evidence that the non-diagonal term provides significant acceleration over
diagonal matrices.
The proposed method can be extended in several ways. Although we focused on forward-backward
splitting, our approach can be easily extended to the new generalized forward-backward algorithm
of [29]. However, if we switch to a primal-dual setting, which is desirable because it can handle
more complicated objective functionals, updating $B_k$ is non-obvious, though one can think of non-diagonal pre-conditioning methods.
Another improvement would be to derive an efficient calculation for rank-2 proximity terms, thus allowing a 0-memory BFGS method. We are able to extend (result not presented here) Theorem 7
to diagonal+rank r matrices. However, in general, one must solve an r-dimensional inner problem
using the semismooth Newton method.
A final possible extension is to take $B_k$ to be diagonal plus rank-1 on diagonal blocks, since if $h$ is separable, this can still be solved by our algorithm (see Remark 11). The challenge here
is adapting this to a robust quasi-Newton update. For some matrices that are well-approximated
by low-rank blocks, such as H-matrices [30], it may be possible to choose Bk ? B to be a fixed
preconditioner.
Acknowledgments
SB would like to acknowledge the Fondation Sciences Mathématiques de Paris for his fellowship.
References
[1] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces.
Springer-Verlag, New York, 2011.
[2] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Computing, 16(5):1190–1208, 1995.
[3] P. L. Combettes and J. C. Pesquet. Proximal splitting methods in signal processing. In H. H. Bauschke,
R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, editors, Fixed-Point Algorithms
for Inverse Problems in Science and Engineering, pages 185?212. Springer-Verlag, New York, 2011.
[4] E. G. Birgin, J. M. Martínez, and M. Raydan. Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim., 10(4):1196–1211, 2000.
[5] S. Wright, R. Nowak, and M. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57:2479–2493, 2009.
[6] J. Barzilai and J. Borwein. Two point step size gradient method. IMA J. Numer. Anal., 8:141–148, 1988.
[7] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. on Imaging Sci., 2(1):183–202, 2009.
[8] B. O'Donoghue and E. Candès. Adaptive restart for accelerated gradient schemes. Preprint: arXiv:1204.3982, 2012.
[9] T. Pock and A. Chambolle. Diagonal preconditioning for first order primal-dual algorithms in convex
optimization. In ICCV, 2011.
[10] G. H.-G. Chen and R. T. Rockafellar. Convergence rates in forward-backward splitting. SIAM Journal on Optimization, 7(2):421–444, 1997.
[11] C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans. Math. Software, 23(4):550–560, 1997.
[12] José Luis Morales and Jorge Nocedal. Remark on "Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization". ACM Trans. Math. Softw., 38(1):7:1–7:4, 2011.
[13] W. W. Hager and H. Zhang. A new active set algorithm for box constrained optimization. SIAM J. Optim., 17:526–557, 2006.
[14] A. Andrew and J. Gao. Scalable training of l1 -regularized log-linear models. In ICML, 2007.
[15] M. Schmidt, G. Fung, and R. Rosales. Fast optimization methods for l1 regularization: A comparative
study and two new approaches. In European Conference on Machine Learning, 2007.
[16] Z. Wen, W. Yin, D. Goldfarb, and Y. Zhang. A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation. SIAM J. Sci. Comput., 32(4):1832–1857, 2010.
[17] T. Goldstein and S. Setzer. High-order methods for basis pursuit. Technical report, CAM-UCLA, 2011.
[18] J. Yu, S.V.N. Vishwanathan, S. Guenter, and N. Schraudolph. A quasi-Newton approach to nonsmooth convex optimization problems in machine learning. J. Machine Learning Research, 11:1145–1200, 2010.
[19] M. Schmidt, E. van den Berg, M. Friedlander, and K. Murphy. Optimizing costly functions with simple
constraints: A limited-memory projected quasi-Newton algorithm. In AISTATS, 2009.
[20] M. Schmidt, D. Kim, and S. Sra. Projected Newton-type methods in machine learning. In S. Sra,
S. Nowozin, and S.Wright, editors, Optimization for Machine Learning. MIT Press, 2011.
[21] J. D. Lee, Y. Sun, and M. A. Saunders. Proximal Newton-type methods for minimizing convex objective
functions in composite form. Preprint: arXiv:1206.1623, 2012.
[22] J.-J. Moreau. Fonctions convexes duales et points proximaux dans un espace hilbertien. CRAS Sér. A Math., 255:2897–2899, 1962.
[23] R. Griesse and D. A. Lorenz. A semismooth Newton method for Tikhonov functionals with sparsity
constraints. Inverse Problems, 24(3):035007, 2008.
[24] C. Broyden. Quasi-Newton methods and their application to function minimization. Math. Comp., 21:577–593, 1967.
[25] N. Gould. Seminal papers in nonlinear optimization. In An introduction to algorithms for continuous optimization. Oxford University Computing Laboratory, 2006. http://www.numerical.rl.ac.uk/nimg/course/lectures/paper/paper.pdf.
[26] J. Nocedal and S. Wright. Numerical Optimization. Springer, 2nd edition, 2006.
[27] I. Dhillon, D. Kim, and S. Sra. Tackling box-constrained optimization via a new projected quasi-Newton approach. SIAM J. Sci. Comput., 32(6):3548–3563, 2010.
[28] Roger Fletcher. On the Barzilai-Borwein method. In Liqun Qi, Koklay Teo, Xiaoqi Yang, Panos M. Pardalos, and Donald W. Hearn, editors, Optimization and Control with Applications, volume 96 of Applied Optimization, pages 235–256. Springer US, 2005.
[29] H. Raguet, J. Fadili, and G. Peyré. Generalized forward-backward splitting. Technical report, Preprint
Hal-00613637, 2011.
[30] W. Hackbusch. A sparse matrix arithmetic based on H-matrices. Part I: Introduction to H-matrices. Computing, 62:89–108, 1999.
[31] R. T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
A provably efficient simplex algorithm for classification
Elad Hazan∗
Technion - Israel Inst. of Tech.
Haifa, 32000
[email protected]
Zohar Karnin
Yahoo! Research
Haifa
[email protected]
Abstract
We present a simplex algorithm for linear programming in a linear classification
formulation. The paramount complexity parameter in linear classification problems is called the margin. We prove that for margin values of practical interest
our simplex variant performs a polylogarithmic number of pivot steps in the worst
case, and its overall running time is near linear. This is in contrast to general linear
programming, for which no sub-polynomial pivot rule is known.
1 Introduction
Linear programming is a fundamental mathematical model with numerous applications in both combinatorial and continuous optimization. The simplex algorithm for linear programming is a cornerstone of operations research. Despite being one of the most useful algorithms ever designed, not
much is known about its theoretical properties.
As of today, it is unknown whether a variant of the simplex algorithm (defined by a pivot rule) exists
which makes it run in strongly polynomial time. Further, the simplex algorithm, being a geometrical algorithm that is applied to polytopes defined by linear programs, relates to deep questions in
geometry. Perhaps the most famous of these is the "polynomial Hirsch conjecture", which states that
the diameter of a polytope is polynomial in its dimension and the number of its facets.
In this paper we analyze a simplex-based algorithm which is guaranteed to run in worst-case polynomial time for a large class of practically-interesting linear programs that arise in machine learning, namely linear classification problems. Further, our simplex algorithm performs only a polylogarithmic number of pivot steps and has overall near linear running time. The only previously known
poly-time simplex algorithm performs a polynomial number of pivot steps [KS06].
1.1 Related work
The simplex algorithm for linear programming was invented by Dantzig [Dan51]. In the sixty years
that have passed, numerous attempts have been made to devise a polynomial time simplex algorithm.
Various authors have proved polynomial bounds on the number of pivot steps required by simplex
variants for inputs that are generated by various distributions, see e.g. [Meg86] as well as articles
referenced therein. However, worst case bounds have eluded researchers for many years.
A breakthrough in the theoretical analysis of the simplex algorithm was obtained by Spielman and
Teng [ST04], who have shown that its smoothed complexity is polynomial, i.e. that the expected
running time under a polynomially small perturbation of an arbitrary instance is polynomial. Kelner
and Spielman [KS06] have used similar techniques to provide a worst-case polynomial time
simplex algorithm.
∗ Work conducted at and funded by the Technion-Microsoft Electronic Commerce Research Center
In this paper we take another step at explaining the success of the simplex algorithm - we show that
for one of the most important and widely used classes of linear programs a simplex algorithm runs
in near linear time.
We note that more efficient algorithms for linear classification exist, e.g. the optimal algorithm of
[CHW10]. The purpose of this paper is to expand our understanding of the simplex method, rather
than obtain a more efficient algorithm for classification.
2 Preliminaries
2.1 Linear classification
Linear classification is a fundamental primitive of machine learning, and is ubiquitous in applications. Formally, we are given a set of vector-label pairs $\{A_i, y_i \mid i \in [n]\}$, such that $A_i \in \mathbb{R}^d$ has $\ell_2$ (Euclidean) norm at most one and $y_i \in \{-1, +1\}$. The goal is to find a hyperplane $x \in \mathbb{R}^d$ that partitions the vectors into two disjoint subsets according to their sign, i.e. $\operatorname{sign}(A_i x) = y_i$. W.l.o.g. we can assume that all labels are positive by negating the corresponding vectors of negative labels, i.e. $\forall i \; y_i = 1$.
Linear classification can be written as a linear program as follows:
$$\text{find } x \in \mathbb{R}^d \ \text{ s.t. } \ \forall i \in [n] \ \ \langle A_i, x \rangle > 0 \qquad (1)$$
The original linear classification problem is then separable, i.e. there exists a separating hyperplane,
if and only if the above linear program has a feasible solution. Further, any linear program in standard form can be written in linear classification form (1) by elementary manipulations and addition
of a single variable (see [DV08] for more details).
Henceforth we refer to a linear program in format (1) by its coefficient matrix A. All vectors are
column vectors, and we denote inner products by $\langle x, y \rangle$. A parameter of paramount importance to linear classification is the margin, defined as follows.
Definition 1. The margin of a linear program in format (1), such that $\forall i \; \|A_i\| \leq 1$, is defined as
$$\gamma = \gamma(A) = \max_{\|x\| \leq 1} \ \min_{i \in [n]} \ \langle A_i, x \rangle .$$
We say that the instance $A$ is a $\gamma$-margin LP.
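As an illustration of Definition 1, the margin is the optimal value of a small second-order cone program. A sketch using cvxpy (an arbitrary choice of modeling tool; any conic solver would do), assuming the rows of $A$ are already normalized to $\|A_i\| \leq 1$:

```python
import cvxpy as cp

def margin(A):
    # gamma(A) = max_{||x|| <= 1} min_{i in [n]} <A_i, x>   (Definition 1)
    n, d = A.shape
    x, t = cp.Variable(d), cp.Variable()
    prob = cp.Problem(cp.Maximize(t), [A @ x >= t, cp.norm(x, 2) <= 1])
    prob.solve()
    return t.value, x.value
```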
Notice that we have restricted x as well as the rows of A to have bounded norm, since otherwise the
margin is ill-defined as it can change by scaling of x. Intuitively, the larger the margin, the easier
the linear program is to solve.
While any linear program can be converted to an equivalent one in form (1), the margin can be exponentially small in the representation. However, in practical applications the margin is usually a
constant independent of the problem dimensions; a justification is given next. Therefore we henceforth treat the margin as a separate parameter of the linear program, and devise efficient algorithms
for solving it when the margin is a constant independent of the problem dimensions.
Support vector machines - why is the margin large? In real-world problems the data is seldom
separable. This is due to many reasons, most prominently noise and modeling errors.
Hence practitioners settle for approximate linear classifiers. Finding a linear classifier that minimizes the number of classification errors is NP-hard, and inapproximable [FGKP06]. The relaxation
of choice is to minimize the sum of errors, called "soft-margin SVM" (Support Vector Machine) [CV95], which is one of the most widely used algorithms in machine learning. Formally, a soft-margin SVM instance is given by the following mathematical program:
$$\min \ \sum_i \xi_i \quad \text{s.t.} \quad \forall i \in [n] \ \ y_i(\langle x, A_i \rangle + b) + \xi_i \geq 0 , \quad \|x\| \leq 1 \qquad (2)$$
The norm constraint on $x$ is usually taken to be the Euclidean norm, but other norms are also common, such as the $\ell_1$ or $\ell_\infty$ constraints that give rise to linear programs.
In this paper we discuss the separable case (formulation (1)) alone. The non-separable case turns out to be much easier when we allow an additive loss of a small constant to the margin. We elaborate on this point in Section 6.1. We will restrict our attention to the case where the bounding norm of $x$ is the $\ell_2$ norm, as it is the most common case.
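For concreteness, a sketch of program (2) with the $\ell_2$ norm constraint, again in cvxpy; the standard non-negativity $\xi_i \geq 0$ on the slack variables is an added assumption here.

```python
import cvxpy as cp

def soft_margin_svm(A, y):
    # Program (2): min sum(xi) s.t. y_i(<x,A_i> + b) + xi_i >= 0, ||x|| <= 1
    n, d = A.shape
    x, b = cp.Variable(d), cp.Variable()
    xi = cp.Variable(n, nonneg=True)
    constraints = [cp.multiply(y, A @ x + b) + xi >= 0, cp.norm(x, 2) <= 1]
    cp.Problem(cp.Minimize(cp.sum(xi)), constraints).solve()
    return x.value, b.value
```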
2.2 Linear programming and smoothed analysis
Smoothed analysis was introduced in [ST04] to explain the excellent performance of the simplex algorithm in practice. A $\sigma$-smooth LP is an LP where each coefficient is perturbed by Gaussian noise of variance $\sigma^2$.
In their seminal paper, Spielman and Teng proved the existence of a simplex algorithm that solves $\sigma$-smooth LPs in polynomial time (polynomial also in $\sigma^{-1}$). Consequently, Vershynin [Ver09] presented a simpler algorithm and significantly improved the running time. In the next sections we will compare our results to the mentioned papers and point out a crucial lemma used in both papers that will also be used here.
2.3 Statement of our results
For a separable SVM instance of $n$ variables in a space of $d$ dimensions and margin $\gamma$, we provide a simplex algorithm with at most $\operatorname{poly}(\log(n), \gamma^{-1})$ pivot steps. Our statement is given for the $\ell_2$-SVM case, that is, the case where the vector $w$ (see Definition 1) has a bounded $\ell_2$ norm. The algorithm achieves a solution with margin $O(\sqrt{\log(n)/d})$ when viewed as a separator in the $d$-dimensional space. However, in an alternative yet (practically) equivalent view, the margin of the solution is in fact arbitrarily close to $\gamma$.
Theorem 1. Let $L$ be a separable $\ell_2$-SVM instance of dimension $d$ with $n$ examples and margin $\gamma$. Assume that $\gamma > c_1 \sqrt{\log n / d}$ where $c_1$ is some sufficiently large universal constant. Let $0 < \varepsilon < \gamma$ be a parameter. The simplex algorithm presented in this paper requires $\tilde{O}(nd)$ preprocessing time and $\operatorname{poly}(\varepsilon^{-1}, \log(n))$ pivot steps. The algorithm outputs a subspace $V \subseteq \mathbb{R}^d$ of dimension $k = \Theta(\log(n)/\varepsilon^2)$ and a hyperplane within it. The margin of the solution when viewed as a hyperplane in $\mathbb{R}^d$ is $O(\sqrt{\log(n)/d})$. When projecting the data points onto $V$, the margin of the solution is $\gamma - \varepsilon$.
In words, the above theorem states that when viewed as a classification problem the obtained margin is almost optimal. We note that when classifying a new point one does not have to project it to the subspace $V$, but can rather assign a sign according to the classifying hyperplane in $\mathbb{R}^d$.
Tightness of the generalization bound. At first sight it seems that our result gives a weak generalization bound, since the margin obtained in the original dimension is low. However, the margin of the found solution in the reduced dimension (i.e., within $V$) is almost optimal (i.e. $\gamma - \varepsilon$ where $\gamma$ is the optimal margin). It follows that the generalization bound is essentially the same as the one obtained by an exact solution.
LP perspective and the smoothed analysis framework As mentioned earlier, any linear program
can be viewed as a classification LP by introducing a single new variable. Furthermore, any solution
with a positive margin translates into an optimal solution to the original LP. Our algorithm solves
the classification LP in a sub-optimal manner in the sense that it does not find a separator with an optimal margin. However, from the perspective of a general LP solver¹, the solution is optimal as any positive margin suffices. It stands to reason that in many practical settings the margin of the solution
is constant or polylogarithmically small at worst. In such cases, our simplex algorithm solves the LP
by using at most a polylogarithmic number of pivot steps. We further mention that without the large
margin assumption, in the smoothed analysis framework it is known ([BD02], Lemma 6.2) that the
margin is w.h.p. polynomially bounded by the parameters. Hence, our algorithm runs in polynomial
time in the smoothed analysis framework as well.
¹ The statement is true only for feasibility LPs. However, any LP can be transformed into a feasibility LP by performing a binary search for its solution value.
3 Our Techniques
The process involves five preliminary steps: reducing the dimension, adding artificial constraints to bound the norm of the solution, perturbing the low dimensional LP, finding a feasible point, and shifting the polytope. The process of reducing the dimension is standard. We use the Johnson-Lindenstrauss Lemma [JL84] to reduce the dimension of the data points from d to $k = O(\log(n)/\varepsilon^2)$, where $\varepsilon$ is an error parameter that can be considered a constant. This step reduces the time complexity by reducing both the number and the running time of the pivot steps. In order to bound the $\ell_2$ norm of the original vector, we bound the $\ell_\infty$ norm of the low dimensional vector. This will eventually result in a multiplicative loss of $\sqrt{\log k}$ to the margin. We note that we could have avoided this loss by bounding the $\ell_1$ norm of the vector at the cost of a more technically involved proof. Specifically, one should bound the $\ell_1$ norm of the embedding of the vector into a space where the $\ell_1$ and $\ell_2$ norms behave similarly, up to a multiplicative distortion of $1 \pm \varepsilon$. Such an embedding of $\ell_2^k$ in $\ell_1^K$ exists for $K = O(k/\varepsilon^2)$ [Ind00]. Another side effect is a larger constant in the polynomial dependence on $\varepsilon$ in the running time.
The perturbation step involves adding a random Gaussian noise vector to the matrix of constraints,
where the amplitude of each row is determined by the norm of the corresponding constraint vector.
This step ensures the bound on the number of pivot steps performed by the simplex algorithm. In
order to find a feasible point we exploit the fact that when the margin is allowed to be negative, there
is always a feasible solution. We prove, for a fixed set of constraints, one of which is a negative lower bound on the margin, that the corresponding point $v_0$ is not only feasible but is the unique optimal solution for a fixed direction. The direction is independent of the added noise, which is a necessary
property when bounding the number of pivot steps.
Our final step is a shift of the polytope. Since we use the shadow vertex pivot rule we must have
an LP instance for which 0 is an interior point of the polytope. This property does not hold for our polytope, as the LP contains inequalities of the form $\langle a, x\rangle \leq 0$. However, we prove that both 0
and v0 are feasible solution to the LP that do not share a common facet. Hence, their average is
an interior point of the polytope and a shift by $-v_0/2$ would ensure that 0 is an interior point as
required.
Once the preprocessing is done we solve the LP via the shadow vertex method, which is guaranteed to finish after a polylogarithmic number of pivot steps. Given a sufficiently small additive noise and a sufficiently large target dimension, we are guaranteed that the obtained solution is an almost optimal solution to the unperturbed low dimensional problem and an $\tilde{O}(\sqrt{k/d})$ approximation to the higher dimensional problem.
4 Tool Set
4.1 Dimension reduction
The Johnson-Lindenstrauss Lemma [JL84] asserts that one can project vectors onto a lower dimensional space and roughly preserve their norms, pairwise distances and inner products. The following
is an immediate consequence of Theorem 2.1 and Lemma 2.2 of [DG03].
Theorem 2. Let $\varepsilon > 0$ and let k, d be integers where $d > k > 9/\varepsilon^2$. Consider a linear projection $M : \mathbb{R}^d \to \mathbb{R}^k$ onto a uniformly chosen subspace². For any pair of fixed vectors $u, v \in \mathbb{R}^d$ where $\|u\|, \|v\| \leq 1$, it holds that
$$\Pr\big[\,\big|\|u\|^2 - \|Mu\|^2\big| > \varepsilon\,\big] < \exp(-k\varepsilon^2/9)$$
$$\Pr\big[\,|\langle u, v\rangle - \langle Mu, Mv\rangle| > 3\varepsilon\,\big] < 3\exp(-k\varepsilon^2/9)$$
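As a quick sanity check of Theorem 2, the following NumPy sketch (ours, not the paper's; the dimensions are arbitrary) projects two unit vectors onto a uniformly chosen k-dimensional subspace and measures how well norms and inner products are preserved. The $\sqrt{d/k}$ rescaling used later in step 1 of Algorithm 1 is included so that norms are preserved in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 1000, 100                       # ambient and target dimensions (arbitrary)

# Projection onto a uniformly chosen k-dimensional subspace: orthonormalize a
# Gaussian matrix and take the transpose, giving M with orthonormal rows.
Q, _ = np.linalg.qr(rng.standard_normal((d, k)))
M = Q.T                                # M : R^d -> R^k

u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(d); v /= np.linalg.norm(v)

s = np.sqrt(d / k)                     # rescaling so that E||s*Mu||^2 = ||u||^2
print(abs(np.linalg.norm(s * M @ u) ** 2 - 1.0))          # small w.h.p.
print(abs(np.dot(s * M @ u, s * M @ v) - np.dot(u, v)))   # small w.h.p.
```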
4.2 The number of vertices in the shadow of a perturbed polytope
A key lemma in the papers of [ST04, Ver09] is a bound on the expected number of vertices in the projection of a perturbed polytope onto a plane. The following geometric theorem will be used in our paper:
² Alternatively, M can be viewed as the composition of a random rotation U followed by taking the first k coordinates.
Theorem 3 ([Ver09] Theorem 6.2). Let $A_1, ..., A_n$ be independent Gaussian vectors in $\mathbb{R}^d$ with centers of norm at most 1, and whose variance satisfies
$$\sigma^2 \leq \frac{1}{36\, d \log n}$$
Let E be a fixed plane in $\mathbb{R}^d$. Then the random polytope $P = \mathrm{conv}(0, A_1, ..., A_n)$ satisfies
$$\mathbb{E}\big[|\,\mathrm{edges}(P \cap E)\,|\big] = O(d^3 \sigma^{-4})$$
4.3 The shadow vertex method
The shadow vertex method is a pivot rule used to solve LPs. In order to apply it, the polytope of the LP must have 0 as an interior point. Algebraically, all the inequalities must be of the form $\langle a, x\rangle \leq 1$ (or alternatively $\langle a, x\rangle \leq b$ where $b > 0$). The input consists of a feasible point v in the polytope and a direction u in which it is farthest, compared to all other feasible points. In a nutshell, the method involves gradually turning the vector u towards the target direction c, while traversing through the optimal solutions for the intermediate directions at every stage. For more on the shadow vertex method we refer the reader to [ST04], Section 3.2.
The manner in which Theorem 3 is used, both in the above mentioned papers and the current one, is the following. Consider an LP of the form
$$\max\; c^\top x \quad \text{s.t.} \quad \forall i \in [n]:\; \langle A_i, x\rangle \leq 1$$
When solving the LP via the shadow vertex method, the number of pivot steps is upper bounded by the number of edges in $P \cap E$, where $P = \mathrm{conv}(0, A_1, ..., A_n)$ and E is the plane spanned by the target direction c and the initial direction u obtained in the phase-1 step.
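To make this use of Theorem 3 concrete, here is a small NumPy/SciPy experiment (ours; the sizes are arbitrary, not the paper's constants) that perturbs unit-norm constraint vectors, projects the polytope's generating points onto a fixed 2D plane, and counts the vertices of the resulting shadow polygon, which is the quantity bounding the number of pivot steps.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
n, d, sigma = 500, 30, 0.05            # illustrative sizes only

A = rng.standard_normal((n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)      # centers of norm at most 1
A_hat = A + sigma * rng.standard_normal((n, d))    # Gaussian perturbation

# A fixed plane E; in the algorithm it would be spanned by the phase-1
# direction u and the objective direction c.
E, _ = np.linalg.qr(rng.standard_normal((d, 2)))

# Project conv(0, A_1, ..., A_n) onto E (the projection of the hull equals
# the hull of the projections) and count the polygon's vertices.
shadow = np.vstack([np.zeros(2), A_hat @ E])
print("shadow polygon vertices:", len(ConvexHull(shadow).vertices))
```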
5 Algorithm and Analysis
Our simplex variant is defined in Algorithm 1 below. It is composed of projecting the polytope
onto a lower dimension, adding noise, finding an initial vertex (Phase 1), shifting and applying the
shadow vertex simplex algorithm [GS55].
Theorem 4. Algorithm 1 performs an expected number of $O(\mathrm{poly}(\log n, \varepsilon^{-1}))$ pivot steps. Over an instance A with $\gamma$-margin it returns, with probability at least $1 - O(1/k + 1/n)$, a feasible solution $\hat{x}$ with margin $\Omega\big(\gamma\sqrt{k/(d \log k)}\big)$.
Note that the algorithm requires knowledge of $\gamma$. This can be overcome with a simple binary search. To prove Theorem 4, we first prove several auxiliary lemmas. Due to space restrictions, some of the proofs are replaced with a brief sketch.
Lemma 5. With probability at least $1 - 1/k$ there exists a feasible solution to LP$_{bounded}$, denoted $(\tilde{x}, \tau)$, that satisfies $\tau \geq \gamma - \varepsilon$ and $\|\tilde{x}\|_\infty \leq 5\sqrt{\log(k)/k}$.
Proof Sketch. Since A has margin $\gamma$, there exists $x^* \in \mathbb{R}^d$ such that $\forall i:\ \langle A_i, x^*\rangle \geq \gamma$ and $\|x^*\|_2 = 1$. We use Theorem 2 to show that the projection of $x^*$ has, w.h.p., both a large margin and a small $\ell_\infty$ norm.
Denote the $(k+1)$-dimensional noise vectors that were added in step 3 by $\mathrm{err}_1, \ldots, \mathrm{err}_{n+2k}$. The following lemma provides some basic facts that occur w.h.p. for the noise vectors. The proof is an easy consequence of the 2-stability of Gaussians, and standard tail bounds of the chi-squared distribution, and is thus omitted.
Lemma 6. Let $\mathrm{err}_1, \ldots, \mathrm{err}_{n+2k}$ be defined as above:
1. w.p. at least $1 - 1/n$, $\forall i:\ \|\mathrm{err}_i\|_2 \leq O(\sigma\sqrt{k \log n}) \leq \frac{1}{20\sqrt{k}}$
Algorithm 1 large margin simplex
1: Input: a $\gamma$-margin LP instance A.
2: Let $\varepsilon = \gamma/6$, $k = 9 \cdot 16^2 \log(n/\varepsilon)/\varepsilon^2$, $\sigma^2 = \frac{1}{100\, k \log k \log n}$.
3: (step 1: dimension reduction) Generate $M \in \mathbb{R}^{k \times d}$, a projection onto a random k-dimensional subspace. Let $\bar{A} \in \mathbb{R}^{n \times (k+1)}$ be given by $\bar{A}_i = (\sqrt{d/k}\, M A_i, -1)$.
4: (step 2: bounding $\|x\|$) Add the k constraints $\langle e_i, x\rangle \leq 6\sqrt{\log(k)/k}$, the k constraints $\langle -e_i, x\rangle \leq 6\sqrt{\log(k)/k}$, and one additional constraint $\tau \geq -8\sqrt{\log k}$. Denote the corresponding coefficient vectors $\bar{A}_{n+1}, \ldots, \bar{A}_{n+2k}$ and $\bar{A}_0$. We obtain the following LP, denoted LP$_{bounded}$:
$$\max\; \langle e_{k+1}, (x, \tau)\rangle \quad \text{s.t.} \quad \forall i \in [0, \ldots, n+2k]:\; \langle \bar{A}_i, (x, \tau)\rangle \leq b_i \tag{3}$$
5: (step 3: adding noise) Add random independently distributed Gaussian noise to every entry of every constraint vector except $\bar{A}_0$, according to $N(0, \sigma^2 \cdot \|\bar{A}_i\|_2^2)$. Denote the resulting constraint vectors by $\hat{A}_i$ and the resulting LP by LP$_{noise}$.
6: (step 4: phase-1) Let $v_0 \in \mathbb{R}^{k+1}$ be the vertex at which inequalities $0, n+k+1, \ldots, n+2k$ hold as equalities. Define $u_0 \in \mathbb{R}^{k+1}$ as $u_0 = (1, \ldots, 1, -1)$.
7: (step 5: shifting the polytope) For all $i \in [0, \ldots, n+2k]$, change the value of $b_i$ to $\hat{b}_i = b_i - \langle \hat{A}_i, v_0/2\rangle$.
8: (step 6: shadow vertex simplex) Let $E = \mathrm{span}(u_0, e_{k+1})$. Apply the shadow vertex simplex algorithm on the polygon which is the projection onto E of $\mathrm{conv}(V)$, where $V = \{0, \hat{A}_0/\hat{b}_0, \hat{A}_1/\hat{b}_1, \ldots, \hat{A}_{n+2k}/\hat{b}_{n+2k}\}$. Let the solution be $\bar{x}$.
9: return $\hat{x}/\|\hat{x}\|_2$ where $\hat{x} = M^\top(\bar{x} + v_0/2)$.
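For intuition, the following Python sketch mirrors the preprocessing of Algorithm 1 (steps 1-3) on a separable instance. It is only a rough illustration: a generic LP solver (scipy's linprog) stands in for the shadow vertex simplex of steps 4-6, the box constraints of step 2 are expressed as variable bounds rather than extra constraint rows, and no attempt is made to tune the constants.

```python
import numpy as np
from scipy.optimize import linprog

def large_margin_sketch(A, gamma, rng):
    """Rough rendition of Algorithm 1; rows A_i are assumed to satisfy
    <A_i, x*> >= gamma for some unit x*. Returns a normalized separator."""
    n, d = A.shape
    eps = gamma / 6.0
    k = min(d, int(9 * 16**2 * np.log(n / eps) / eps**2))   # reduced dimension
    sigma2 = 1.0 / (100 * k * np.log(k) * np.log(n))

    # step 1: project onto a random k-dim subspace; rows (sqrt(d/k) M A_i, -1)
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))
    M = Q.T
    A_bar = np.hstack([np.sqrt(d / k) * (A @ M.T), -np.ones((n, 1))])

    # step 3: Gaussian noise with per-row variance sigma^2 * ||A_bar_i||^2
    scale = np.sqrt(sigma2) * np.linalg.norm(A_bar, axis=1, keepdims=True)
    A_noise = A_bar + scale * rng.standard_normal(A_bar.shape)

    # step 2 (as bounds): |x_j| <= 6 sqrt(log k / k) and tau >= -8 sqrt(log k)
    box = 6.0 * np.sqrt(np.log(k) / k)
    bounds = [(-box, box)] * k + [(-8.0 * np.sqrt(np.log(k)), None)]

    # maximize tau subject to <sqrt(d/k) M A_i, x> - tau >= 0 for all i,
    # i.e. minimize -tau subject to -A_noise (x, tau) <= 0
    c = np.zeros(k + 1); c[-1] = -1.0
    res = linprog(c, A_ub=-A_noise, b_ub=np.zeros(n), bounds=bounds,
                  method="highs")
    x_hat = M.T @ res.x[:k]
    return x_hat / np.linalg.norm(x_hat)
```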
2. Fix some $I \subseteq [n+2k]$ of size $|I| = k$ and define $B_I$ to be the $(k+1) \times (k+1)$ matrix whose first k columns consist of $\{\mathrm{err}_i\}_{i \in I}$ and whose $(k+1)$-st column is the 0 vector. W.p. at least $1 - 1/n$ it holds that the top singular value of $B_I$ is at most $1/2$. Furthermore, w.p. at least $1 - 1/n$ the 2-norms of the rows of $B_I$ are upper bounded by $\frac{1}{4\sqrt{k+1}}$.
Lemma 7. Let $\bar{A}, \hat{A}, \tilde{x} \in \mathbb{R}^k$ be as above. Then with probability at least $1 - O(1/k)$:
1. for $\tau = \gamma - 2\varepsilon$, the point $(\tilde{x}, \tau) \in \mathbb{R}^{k+1}$ is a feasible solution of LP$_{noise}$.
2. for every $x \in \mathbb{R}^k$, $\tau \in \mathbb{R}$ where $(x, \tau)$ is a feasible solution of LP$_{noise}$ it holds that
$$\forall i:\;\; \big|\langle \hat{A}_i, (x, \tau)\rangle - \langle \bar{A}_i, (x, \tau)\rangle\big| \leq \varepsilon\sqrt{\log(k)/k}, \qquad \|x\|_\infty \leq 7\sqrt{\log(k)/k}$$
Proof of this Lemma is deferred to the full version of this paper.
Lemma 8. With probability $1 - O(1/k)$, the vector $v_0$ is a basic feasible solution (vertex) of LP$_{noise}$.
Proof sketch. The vector $v_0$ is a basic solution as it is defined by $k+1$ equalities. To prove that it is feasible we exploit the fact that the last entry, corresponding to $\tau$, is sufficiently small and that all of the constraints are of the form $\langle a, x\rangle \geq \tau$.
The next lemma provides us with a direction $u_0$ for which $v_0$ is the unique optimal solution w.r.t. the objective $\max_{x \in P} \langle u_0, x\rangle$, where P is the polytope of LP$_{noise}$. The vector $u_0$ is independent of the added noise. This is crucial for the following steps.
Lemma 9. Let $u_0 = (1, \ldots, 1, -1)$. With probability at least $1 - O(1/n)$, the point $v_0$ is the optimal solution w.r.t. the objective $\max_{x \in P} \langle u_0, x\rangle$, where P is the polytope of LP$_{noise}$.
Proof Sketch. The set of points u for which $v_0$ is the optimal solution is defined by a (blunt) cone $\{\sum_i \lambda_i a_i \mid \forall i,\ \lambda_i > 0\}$, where $a_i = -\hat{A}_{n+k+i}$ for $i \in [k]$ and $a_{k+1} = -\hat{A}_0$. Consider the cone corresponding to the constraints $\bar{A}$; $u_0$ resides in its interior, far away from its borders. Specifically, $u_0 = \sum_{i=1}^{k}(-\bar{A}_{n+k+i}) + (-\bar{A}_0)$. Since the difference between $\bar{A}_i$ and $\hat{A}_i$ is small w.h.p., we get that $u_0$ resides, w.h.p., in the cone of points for which $v_0$ is optimal, as required.
Lemma 10. The point $v_0/2$ is a feasible interior point of the polytope with probability at least $1 - O(1/n)$.
Proof. By Lemma 9, $v_0$ is a feasible point. Also, according to its definition it is clear that, w.p. 1, it lies on $k+1$ facets of the polytope, none of which contains the point 0. In other words, no facet contains both $v_0$ and 0. Since 0 is clearly a feasible point of the polytope, we get that $v_0/2$ is a feasible interior point, as claimed.
Proof of Theorem 4. We first note that in order to use the shadow vertex method, 0 must be an interior point of the polytope. This does not happen in the original polytope, hence the shift of step 5. Indeed, according to Lemma 10, $v_0/2$ is an interior point of the polytope, and by shifting it to 0, the shadow vertex method can indeed be implemented.
We will assume that the statements of the auxiliary lemmas hold. This happens with probability at least $1 - O(1/k + 1/n)$, which is the stated success probability of the algorithm. By Lemma 7, LP$_{noise}$ has a basic feasible solution with $\tau \geq \gamma - 2\varepsilon$. The vertex $v_0$, along with the direction $u_0$ which it optimizes, is a feasible starting vector for the shadow vertex simplex algorithm on the plane E, and hence applying the simplex algorithm with the shadow vertex pivot rule will return a basic feasible solution in dimension $k+1$, denoted $(\bar{x}, \tau')$, for which $\forall i \in [n]:\ \langle \hat{A}_i, (\bar{x}, \tau')\rangle \leq 0$ and $\tau' \geq \gamma - 2\varepsilon$. Using Lemma 7 part two, we have that for all $i \in [n]$,
$$\langle \bar{A}_i, (\bar{x}, \tau')\rangle \leq \langle \hat{A}_i, (\bar{x}, \tau')\rangle + \varepsilon\sqrt{\log(k)/k} \;\;\Longrightarrow\;\; \sqrt{d/k}\,\langle M A_i, \bar{x}\rangle \geq \tau' - \varepsilon \geq \gamma - 3\varepsilon. \tag{4}$$
Since $\hat{x} = \sqrt{d/k}\, M^\top \bar{x}$, we get that for all $i \in [n]$, $\langle A_i, \hat{x}\rangle = \sqrt{d/k}\, \langle M A_i, \bar{x}\rangle \geq \gamma - 3\varepsilon$, and this provides a solution to the original LP.
To compute the margin of this solution, note that the rows of M consist of an orthonormal set. Hence, by Lemma 7, $\|M^\top \bar{x}\|_2 = \|\bar{x}\|_2 \leq 7\sqrt{\log k}$, meaning that $\|\hat{x}\|_2 \leq 7\sqrt{\log(k)\, d/k}$. It follows that the margin of the solution is at least $(\gamma - 3\varepsilon)\cdot\sqrt{k/(\log(k)\, d)}/7$.
Running time: The number of steps in this simplex phase is bounded by the number of vertices in the polygon which is the projection of the polytope of LP$_{noise}$ onto the plane $E = \mathrm{span}\{u_0, e_{k+1}\}$. Let $V = \{\hat{A}_i\}_{i=1}^{n+2k}$. Since all of the points in V are perturbed, the number of vertices in the polygon $\mathrm{conv}(V) \cap E$ is bounded w.h.p. as in Theorem 3 by $O(k^3 \sigma^{-4}) = \tilde{O}(\log^{11}(n)/\varepsilon^{14})$. Since the points $0, \hat{A}_0$ reside in the plane E, the number of vertices of $\mathrm{conv}(V \cup \{0, \hat{A}_0\}) \cap E$ is at most the number of vertices in $\mathrm{conv}(V) \cap E$ plus 4, which is asymptotically the same. Each pivot step in the shadow vertex simplex method can be implemented to run in time $O(nk) = \tilde{O}(n/\varepsilon^2)$ for n constraints in dimension k. The dimension reduction step requires $\tilde{O}(nd)$ time. All other operations, including adding noise and shifting the polytope, are faster than the shadow vertex simplex procedure, leading to an overall running time of $\tilde{O}(nd)$ (assuming $\varepsilon^{-1}$ is a constant or sub-polynomial in d).
Proof of Theorem 1. The statement regarding the margin of the solution, viewed as a point in $\mathbb{R}^d$, is immediate from Theorem 4. To prove the claim regarding the view in the low dimensional space, consider Equation 4 in the above proof. Put in words, it states the following: Consider the projection M of the algorithm (or alternatively its image V) and the classification problem of the points projected onto V. The margin of the solution produced by the algorithm (i.e., of $\bar{x}$) is at least $\gamma - 3\varepsilon$. The $\ell_\infty$-norm of $\bar{x}$ is clearly bounded by $O(\sqrt{\log(k)/k})$. Hence, the margin of the normalized point $\bar{x}/\|\bar{x}\|_2$ is $\Omega(\gamma/\sqrt{\log k})$. In order to achieve a margin of $\gamma - O(\varepsilon)$, one should replace the $\ell_\infty$ bound in the LP with an approximate $\ell_2$ bound. This can be done via linear constraints by bounding the $\ell_1$ norm of $Fx$, where $F : \mathbb{R}^k \to \mathbb{R}^K$, $K = O(k/\varepsilon^2)$, and F has the property that for every $x \in \mathbb{R}^k$, $\big|\|Fx\|_1/\|x\|_2 - 1\big| < \varepsilon$. A properly scaled matrix of i.i.d. Gaussians has this property [Ind00]. This step would eliminate the need for the extra $\sqrt{\log k}$ factor. The other multiplicative constants can be reduced to $1 + O(\varepsilon)$, thus ensuring the norm of $\bar{x}$ is at most $1 + O(\varepsilon)$, by assigning a slightly smaller value for $\varepsilon$; specifically, $\varepsilon/\gamma$ would do. Once the 2-norm of $\bar{x}$ is bounded by $1 + O(\varepsilon)$, the margin of the normalized point is $\gamma - O(\varepsilon)$.
6 Discussion
The simplex algorithm for linear programming is a cornerstone of operations research whose computational complexity remains elusive despite decades of research. In this paper we examine the simplex algorithm through the lens of machine learning, and in particular via linear classification, which
is equivalent to linear programming. We show that in the cases where the margin parameter is large,
say a small constant, we can construct a simplex algorithm whose worst case complexity is (quasi)
linear. Indeed in many practical problems the margin parameter is a constant unrelated to the other
parameters. For example, in cases where a constant inherent noise exists, the margin must be large
otherwise the problem is simply unsolvable.
6.1 Soft margin SVM
In the setting of this paper, the case of soft margin SVM turns out to be algorithmically easier to
solve than the separable case. In a nutshell, the main hardship in the separable case is that a large
number of data points may be problematic. This is because the separating hyperplane must separate
all of the points and not most of them, meaning that every one of the data points must be taken in
consideration. A more formal statement is the following. In our setting we have three parameters: the number of points n, the dimension d, and the "sub-optimality" $\varepsilon$. In the soft margin (e.g., hinge loss) case, the number of points may be reduced to poly$(\varepsilon^{-1})$ by elementary methods. Specifically, it is an easy task to prove that if we omit all but a random subset of $\log(\varepsilon^{-1})/\varepsilon^2$ data points, the hinge loss corresponding to the obtained separator w.r.t. the full set of points will be $O(\varepsilon)$. In fact, it suffices to solve the problem with the reduced number of points, up to an additive loss of $\varepsilon$ to the margin, to obtain the same result. As a consequence of the reduced number of points, the dimension can be reduced, analogously to the separable case, to $d' = O(\log(\varepsilon^{-1})/\varepsilon^2)$.
The above essentially states that the original problem can be reduced, by performing a single pass over the input (perhaps even less than that), to one where the only parameter is $\varepsilon$. From this point, the only challenge is to solve the resulting LP, up to an $\varepsilon$ additive loss to the optimum, in time polynomial in its size. There are many methods available for this problem.
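A minimal sketch of this point-reduction argument (ours; the subsample size below follows the bound in the text with no constant tuning, and any soft-margin solver can be plugged in for fit):

```python
import numpy as np

def hinge_loss(w, X, y):
    # average hinge loss of the separator w on the full labeled set (X, y)
    return np.mean(np.maximum(0.0, 1.0 - y * (X @ w)))

def fit_on_subsample(X, y, eps, fit, rng):
    # keep only ~log(1/eps)/eps^2 random points and train on those alone;
    # the claim is that hinge_loss(w, X, y) of the result is O(eps)
    m = min(len(y), int(np.ceil(np.log(1.0 / eps) / eps**2)))
    idx = rng.choice(len(y), size=m, replace=False)
    return fit(X[idx], y[idx])
```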
To conclude, the soft margin SVM problem is much easier than the separable case, hence we do not
analyze it in this paper.
References
[BD02] A. Blum and J. Dunagan. Smoothed analysis of the perceptron algorithm for linear programming. In Proceedings of the Thirteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 905-914. Society for Industrial and Applied Mathematics, 2002.
[CHW10] Kenneth L. Clarkson, Elad Hazan, and David P. Woodruff. Sublinear optimization for machine learning. In FOCS, pages 449-457. IEEE Computer Society, 2010.
[CV95] Corinna Cortes and Vladimir Vapnik. Support-vector networks. In Machine Learning, pages 273-297, 1995.
[Dan51] G. B. Dantzig. Maximization of linear function of variables subject to linear inequalities. Activity Analysis of Production and Allocation, pages 339-347, 1951.
[DG03] Sanjoy Dasgupta and Anupam Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Struct. Algorithms, 22:60-65, January 2003.
[DV08] John Dunagan and Santosh Vempala. A simple polynomial-time rescaling algorithm for solving linear programs. Math. Program., 114(1):101-114, 2008.
[FGKP06] Vitaly Feldman, Parikshit Gopalan, Subhash Khot, and Ashok Kumar Ponnuswami. New results for learning noisy parities and halfspaces. In FOCS, pages 563-574. IEEE Computer Society, 2006.
[GS55] S. Gass and T. Saaty. The computational algorithm for the parametric objective function. Naval Research Logistics Quarterly, 2:39-45, 1955.
[Ind00] P. Indyk. Stable distributions, pseudorandom generators, embeddings and data stream computation. In Foundations of Computer Science, 2000. Proceedings. 41st Annual Symposium on, pages 189-197. IEEE, 2000.
[JL84] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into Hilbert space. Contemporary Mathematics, 26:189-206, 1984.
[KS06] Jonathan A. Kelner and Daniel A. Spielman. A randomized polynomial-time simplex algorithm for linear programming. In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, STOC '06, pages 51-60, New York, NY, USA, 2006. ACM.
[Meg86] Nimrod Megiddo. Improved asymptotic analysis of the average number of steps performed by the self-dual simplex algorithm. Math. Program., 35:140-172, June 1986.
[ST04] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. J. ACM, 51:385-463, May 2004.
[Ver09] Roman Vershynin. Beyond Hirsch conjecture: Walks on random polytopes and smoothed complexity of the simplex method. SIAM J. Comput., 39(2):646-678, 2009.
3,894 | 4,525 | Patient Risk Stratification for Hospital-Associated
C. diff as a Time-Series Classification Task
Jenna Wiens
[email protected]
John V. Guttag
[email protected]
Eric Horvitz
[email protected]
Abstract
A patient's risk for adverse events is affected by temporal processes including the nature and timing of diagnostic and therapeutic activities, and the overall evolution of the patient's pathophysiology over time. Yet many investigators ignore this temporal aspect when modeling patient outcomes, considering only the patient's
current or aggregate state. In this paper, we represent patient risk as a time series. In doing so, patient risk stratification becomes a time-series classification
task. The task differs from most applications of time-series analysis, like speech
processing, since the time series itself must first be extracted. Thus, we begin
by defining and extracting approximate risk processes, the evolving approximate
daily risk of a patient. Once obtained, we use these signals to explore different
approaches to time-series classification with the goal of identifying high-risk patterns. We apply the classification to the specific task of identifying patients at risk
of testing positive for hospital acquired Clostridium difficile. We achieve an area
under the receiver operating characteristic curve of 0.79 on a held-out set of several hundred patients. Our two-stage approach to risk stratification outperforms
classifiers that consider only a patient?s current state (p<0.05).
1 Introduction
Time-series data are available in many different fields, including medicine, finance, information retrieval and weather prediction. Much research has been devoted to the analysis and classification of
such signals [1] [2]. In recent years, researchers have had great success with identifying temporal
patterns in such time series and with methods that forecast the value of variables. In most applications there is an explicit time series, e.g., ECG signals, stock prices, audio recordings, or daily
average temperatures.
We consider a novel application of time-series analysis, patient risk. Patient risk has an inherent
temporal aspect; it evolves over time as it is influenced by intrinsic and extrinsic factors. However, it
has no easily measurable time series. We hypothesize that, if one could measure risk over time, one
could learn patterns of risk that are more likely to lead to adverse outcomes. In this work, we frame
the problem of identifying hospitalized patients for high-risk outcomes as a time-series classification
task. We propose and motivate the study of patient risk processes to model the evolution of risk over
the course of a hospital admission.
Specifically, we consider the problem of using time-series data to estimate the risk of an inpatient
becoming colonized with Clostridium difficile (C. diff ) during a hospital stay. (C. diff is a bacterial
infection most often acquired in hospitals or nursing homes. It causes severe diarrhea and can lead to
colitis and other serious complications.) Despite the fact that many of the risk factors are well known
(e.g., exposure, age, underlying disease, use of antimicrobial agents, etc.) [3], C. diff continues to
be a significant problem in many US hospitals. From 1996 to 2009, C. diff rates for hospitalized
patients aged $\geq$ 65 years increased by 200% [4].
There are well-established clinical guidelines for predicting whether a test for C. diff is likely to
be positive [5]. Such guidelines are based largely on the presence of symptoms associated with
an existing C. diff infection, and thus are not useful for predicting whether a patient will become
infected. In contrast, risk stratification models aim to identify patients at high risk of becoming
infected. The use of these models could lead to a better understanding of the risk factors involved
and ultimately provide information about how to reduce the incidence of C. diff in hospitals.
There are many different ways to define the problem of estimating risk. The precise definition has
important ramifications for both the potential utility of the estimate and the difficulty of the problem.
Reported results in the medical literature for the problem of risk stratification for C. diff vary greatly,
with areas under the receiver operating characteristic curve (AUC) of 0.628-0.896 [6] [7][8][9][10].
The variation in classification performance is based in part on differences in the task definition, in
part on differences in the study populations, and in part on the evaluation method. The highest
reported AUCs were from studies of small (e.g., 50 patients) populations, relatively easy tasks (e.g.,
inclusion of large number of patients with predictably short stays, e.g., patients in labor), or both.
Additionally, some of the reported results were not obtained from testing on held-out sets.
We consider patients with at least a 7-day hospital admission who do not test positive for C. diff
until day 7 or later. This group of patients is already at an elevated risk for acquiring C. diff because
of the duration of the hospital stay. Focusing on this group makes the problem more relevant (and
more difficult) than other related tasks.
To the best of our knowledge, representing and studying the risk of acquiring C. diff (or any other
infection) as a time series has not previously been explored. We propose a risk stratification method
that aims to identify patterns of risk that are more likely to lead to adverse outcomes. In [11] we
proposed a method for extracting patient risk processes. Once patient risk processes are extracted,
the problem of risk stratification becomes that of time-series classification. We explore a variety
of different methods including classification using similarity metrics, feature extraction, and hidden
Markov models. A direct comparison with the reported results in the literature for C. diff risk
prediction is difficult because of the differences in the studies mentioned above. Thus, to measure
the added value of considering the temporal dimension, we implemented the standard approach as
represented in the related literature of classifying patients based on their current or average state and
applied it to our data set. Our method leads to a significant improvement over this more traditional
approach.
2 The Data
Our dataset comes from a large US hospital database. We extracted all stays of $\geq$ 7 days from all inpatient admissions that occurred over the course of a year.
To ensure that we are in fact predicting the acquisition of C. diff during the current admission, we
remove patients who tested positive for C. diff in the 60 days preceding or, if negative, following the
current admission [3]. In addition, we remove patients who tested positive for C. diff before day 7
of the admission. Positive cases are those patients who test positive on or after 7 days in the hospital.
Negative patients are all remaining patients.
We define the start of the risk period of a patient as the time of admission and define the end of
the risk period, according to the following rule: if the patient tests positive, the first positive test
marks the end of the risk period, otherwise the patient is considered at risk until discharge. The final
population consisted of 9,751 hospital admissions and 8,166 unique patients. Within this population,
177 admissions had a positive test result for C. diff.
3 Methods
Patient risk is not a directly measurable time series. Thus, we propose a two-stage approach to risk
stratification. We first extract approximate risk processes and then apply time-series classification
techniques to those processes. Both stages are described here; for more detail regarding the first
stage we direct the reader to [11].
3.1 Extracting Patient Risk Processes
We extract approximate patient risk processes, i.e., a risk time series for each admission, by independently calculating the daily risk of a patient and then concatenating these predictions. We begin
by extracting more than 10,000 variables for each day of each hospital admission. Almost all of
the features pertain to categorical features that have been exploded into binary features; hence the
high dimensionality. Approximately half of the features are based on data collected at the time of
admission e.g., patient history, admission reason, and patient demographics. These features remain
constant throughout the stay. The remaining features are collected over the course of the admission and may change on a daily basis e.g., lab results, room location, medications, and vital sign
measurements.
We employ a support vector machine (SVM) to produce daily risk scores. Each day of an admission
is associated with its own feature vector. We refer to this feature vector of observations as the
patient's current state. However, we do not have ground-truth labels for each day of a patient's
admission. We only know whether or not a patient eventually tests positive for C. diff. Thus we
assign each day of an admission in which the patient eventually tests positive as positive, even though
the patient may not have actually been at high risk on each of those days. In doing so, we hope to
identify high-risk patients as early as possible. Since we do not expect a patient's risk to remain
constant during an entire admission, there is noise in the training labels. For example, there may be
some days that look almost identical in the feature space but have different labels. To handle this
noise we use a soft-margin SVM, that allows for misclassifications. As long as our assumption does
not lead to more incorrect labels than correct labels, it is possible to learn a meaningful classifier,
despite the approximate labels. We do not use the SVM as a classifier but instead consider the
continuous prediction made by the SVM, i.e., the distance to the decision boundary. We take the
concatenated continuous outputs of the SVM for a hospital admission as a representation of the
approximate risk process. We give some examples of these approximate risk processes for both case
and non-case patients in Figure 1.
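A rough sketch of this first stage is given below (we use scikit-learn's linear SVM as a stand-in for the soft-margin SVM with asymmetric costs described in Section 4.1; the parameter values and function names are ours, for illustration only):

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_daily_risk_model(day_features, day_labels, C=0.01):
    # day_features: one row of observations per patient-day; day_labels: 1 if
    # the admission eventually tests positive (the noisy day-level labeling
    # described above), 0 otherwise. A soft margin tolerates the label noise.
    svm = LinearSVC(C=C, class_weight="balanced")
    svm.fit(day_features, day_labels)
    return svm

def risk_process(svm, admission_day_features):
    # The approximate risk process: the concatenated continuous SVM outputs
    # (signed distances to the decision boundary), one value per day.
    return svm.decision_function(admission_day_features)
```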
[Figure 1: two example risk processes, plotting approximate risk against time (days); in one, an annotation marks the day the patient tests positive, and in the other, the day the patient is discharged.]
Figure 1: Approximate daily risk represented as a time series results in a risk process for each
patient.
One could risk stratify patients based solely on their current state, i.e., use the daily risk value from
the risk process to classify patients as either high risk or low risk on that day. This method, which
ignores the temporal evolution of risk, achieves an AUC of 0.69 (95% CI 0.61-0.77). Intuitively,
current risk should depend on previous risk. We tested this intuition by classifying patients based
on the average of their risk process. This performed significantly better achieving an AUC of 0.75
(95% CI 0.69-0.81). Still, averaging in this way ignores the possibility of leveraging richer temporal
patterns, as discussed in the next section.
3.2 Classifying Patient Risk Processes
Given the risk processes of each patient, the risk stratification task becomes a time-series classification task. Time-series classification is a well-investigated area of research, with many proposed
methods. For an in-depth review of sequence classification we refer the reader to [2]. Here, we
explore three different approaches to the problem: classification based on feature vectors, similarity
measures, and finally HMMs. We first describe each method, and then present results about their
performance in Section 4.
3.2.1 Classification using Feature Extraction
There are many different ways to extract features from time series. In the literature many have
proposed time-frequency representations extracted using various Fourier or wavelet transforms [12].
Given the small number of samples composing our time-series data, we were wary of applying such
techniques. Instead we chose an approach inspired by the combination of classifiers in the text
domain using reliability indicators [13]. We define a feature vector based on different combinations
of the predictions made in the first stage. We list the features in Table 1.
Table 1: Univariate summary statistics for observation vector $x = [x_1, x_2, ..., x_n]$

Feature | Description | Formula
1 | length of time series | $n$
2 | average daily risk | $\frac{1}{n}\sum_{i=1}^{n} x_i$
3 | linear weighted average daily risk | $\frac{2}{n(n+1)}\sum_{i=1}^{n} i\, x_i$
4 | quadratic weighted average daily risk | $\frac{6}{n(n+1)(2n+1)}\sum_{i=1}^{n} i^2 x_i$
5 | risk on most recently observed day | $x_n$
6 | standard deviation of daily risk | $\sigma$
7 | average absolute change in daily risk | $\frac{1}{n-1}\sum_{i=1}^{n-1} |x_i - x_{i+1}|$
8 | average absolute change in 1st difference | $\frac{1}{n-2}\sum_{i=1}^{n-2} |x'_i - x'_{i+1}|$
9 | fraction of the visit with positive risk score | $\frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{x_i > 0}$
10 | fraction of the visit with negative risk score | $\frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{x_i < 0}$
11 | sum of the risk over the most recent 3 days | $\sum_{i=n-2}^{n} x_i$
12 | longest positive run (normalized) | -
13 | longest negative run (normalized) | -
14 | maximum observation | $\max_i x_i$
15 | location of maximum (normalized) | $\frac{1}{n}\,\mathrm{argmax}_i\, x_i$
16 | minimum observation | $\min_i x_i$
17 | location of minimum (normalized) | $\frac{1}{n}\,\mathrm{argmin}_i\, x_i$
Features 2-4 are averages; Features 3 and 4 weight days closer to the time of classification more
heavily. Features 6-10 are different measures for the amount of fluctuation in the time series. Features 5 and 11 capture information about the most recent states of the patient. Features 12 and
13 identify runs in the data, i.e., periods of time where the patient is consistently at high or low
risk. Finally, Features 14-17 summarize information regarding global maxima and minima in the
approximate risk process.
Given these feature definitions, we map each patient admission risk process to a fixed-length feature
vector. These summarization variables allow one to compare time series of different lengths, while
still capturing temporal information, e.g., when the maximum risk occurs relative to the time of
prediction. Given this feature space, one can learn a classifier to identify high-risk patients. This
approach is described in Figure 2.
[Figure 2 diagram, three steps: (1) Given an m x n admission P, where m is the number of observations for each day and n is the number of days, predict daily risk $x_i$ based on the observations $p_i$, for $i = 1...n$ (SVM1). (2) Concatenate the predictions and extract a feature vector $x'$ based on the time series x. (3) Classify each admission based on $x'$ (SVM2); predict whether or not P will test positive for C. diff.]
Figure 2: A two-step approach to risk stratification where predefined features are extracted from the
time-series data.
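A sketch of the feature extraction in step 2, translating Table 1 into code (ours; the two run features are computed directly from the sign pattern rather than from a closed-form expression):

```python
import numpy as np

def longest_run(mask):
    # length of the longest consecutive run of True values
    best = cur = 0
    for m in mask:
        cur = cur + 1 if m else 0
        best = max(best, cur)
    return best

def risk_process_features(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    i = np.arange(1, n + 1)
    d1 = np.diff(x)                                # first difference x'
    return np.array([
        n,                                         # 1  length
        x.mean(),                                  # 2  average daily risk
        2.0 / (n * (n + 1)) * np.sum(i * x),       # 3  linear weighted average
        6.0 / (n * (n + 1) * (2 * n + 1)) * np.sum(i**2 * x),  # 4 quadratic weighted
        x[-1],                                     # 5  most recent day
        x.std(),                                   # 6  standard deviation
        np.abs(d1).mean() if n > 1 else 0.0,       # 7  avg abs change
        np.abs(np.diff(d1)).mean() if n > 2 else 0.0,  # 8 avg abs change of 1st diff
        np.mean(x > 0),                            # 9  fraction positive
        np.mean(x < 0),                            # 10 fraction negative
        x[-3:].sum(),                              # 11 sum of most recent 3 days
        longest_run(x > 0) / n,                    # 12 longest positive run (norm.)
        longest_run(x < 0) / n,                    # 13 longest negative run (norm.)
        x.max(),                                   # 14 maximum
        (np.argmax(x) + 1) / n,                    # 15 location of maximum (norm.)
        x.min(),                                   # 16 minimum
        (np.argmin(x) + 1) / n,                    # 17 location of minimum (norm.)
    ])
```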
3.2.2 Classification using Similarity Metrics
In the previous section, we learned a second classifier based on extracted features. In this section, we consider classifiers based on the raw data, i.e., the concatenated time series from Step 2 in Figure 2. SVMs classify examples based on a kernel or similarity measure. One of the most common non-linear kernels is the Gaussian radial basis function kernel: $k(x_i, x_j) = \exp(-\gamma\|x_i - x_j\|^2)$. Its output is dependent on the Euclidean distance between examples $x_i$ and $x_j$. This distance measure requires vectors of the same length. We consider two approaches to generating vectors of the same length: (1) linear interpolation and (2) truncation. In the first approach we linearly interpolate between points. In the second approach we consider only the most recent 5 days of data, $x_{n-4}, x_{n-3}, ..., x_n$.
Euclidean distance is a one-to-one comparison. In contrast, the dynamic time warping (DTW) distance is a one-to-many comparison [14]. DTW computes the distance between two time series by finding the minimal cost alignment. Here, the cost is the absolute distance between aligned points. We linearly interpolate all time series to have the same length, the length of the longest admission within the dataset (54). To ensure that the warping path does not contain lengthy vertical and horizontal segments, we constrain the warping window (how far the warping path can stray from the diagonal) using the Sakoe-Chiba band with a width of 10% of the length of the time series [15]. We learn an SVM classifier based on this distance metric by replacing the Euclidean distance in the RBF kernel with the DTW distance, $k(x_i, x_j) = \exp(-\gamma\, DTW(x_i, x_j))$, as in [16].
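Below is a minimal DTW implementation with a Sakoe-Chiba band matching the description above (absolute-difference point cost; for this setting the band half-width r would be 10% of the interpolated length, i.e., about 5 for length 54). The resulting similarity $\exp(-\gamma\, DTW(x_i, x_j))$ can be passed to an SVM as a precomputed kernel; note that, unlike the Euclidean RBF kernel, a DTW-based kernel is not guaranteed to be positive semi-definite.

```python
import numpy as np

def dtw_distance(a, b, r):
    """DTW distance between equal-length 1-D series a and b, with a
    Sakoe-Chiba band of half-width r (cells with |i - j| > r are excluded)."""
    n = len(a)
    D = np.full((n + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - r), min(n, i + r) + 1):
            cost = abs(a[i - 1] - b[j - 1])   # absolute distance of aligned points
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, n]
```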
3.2.3 Classification using Hidden Markov Models
We can make observations about a patient on a daily basis, but we cannot directly measure whether
or not a patient is at high risk. Hence, we used the phrase approximate risk process. By applying
HMMs we assume there is a sequence of hidden states, $x_1, x_2, ..., x_n$, that govern the observations $y_1, y_2, ..., y_n$. Here, the observations are the predictions made by the SVM. We consider a two-state HMM where each state, $s_1$ and $s_2$, is associated with a mixture of Gaussian distributions over
possible observations. At an intuitive level, one can think of these states as representing low and
high risk. Using the data, we learn and apply HMMs in two different ways.
Classification via Likelihood
We hypothesize that there may exist patterns of risk over time that are more likely to lead to a positive test result. To test this hypothesis, we first consider the classic approach to classification using
HMMs described in Section VI-B [17]. We learn two separate HMMs: one using only observation sequences from positive patients and another using only observation sequences from negative
patients. We initialize the emission probabilities differently for each model based on the data, but
initialize the transition probabilities as uniform probabilities. Given a test observation sequence, we
apply both models and calculate the log-likelihood of the data given each model using the forwardbackward algorithm. We classify patients continuously, based on the ratio of the log-likelihoods.
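A sketch of this two-model likelihood-ratio scheme using the hmmlearn package (a stand-in we chose; the paper used a Matlab HMM toolbox, and the hyperparameters below are illustrative):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_class_hmm(sequences, seed=0):
    # sequences: list of 1-D risk processes from one class (positive or negative)
    X = np.concatenate(sequences).reshape(-1, 1)
    lengths = [len(s) for s in sequences]
    hmm = GaussianHMM(n_components=2, n_iter=50, random_state=seed)
    hmm.fit(X, lengths)
    return hmm

def log_likelihood_ratio(pos_hmm, neg_hmm, seq):
    # continuous classification score: threshold on the log-likelihood ratio
    y = np.asarray(seq, dtype=float).reshape(-1, 1)
    return pos_hmm.score(y) - neg_hmm.score(y)
```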
Classification via Posterior State Probabilities
As we saw in Figure 1, the SVM output for a patient may fluctuate greatly from day to day. While large fluctuations in risk are not impossible, they are not common. Recall that in our initial calculation, while the variables from the time of admission are included in the prediction, the previous day's risk is not. The predictions produced by the SVM are independent. HMMs allow us to model the observations as a sequence and induce a temporal dependence in the model: the current state, $x_t$, depends on the previous state, $x_{t-1}$.
We learn an HMM on a training set. We consider a two-state model in which we initialize the emission probabilities as $p(y_t|x_t = s_1) = N(\mu_{s_1}, 1)$ and $p(y_t|x_t = s_2) = N(\mu_{s_2}, 1)$ for all t, where $\mu_{s_1} = -1$ and $\mu_{s_2} = 1$. Based on this initialization, $s_1$ and $s_2$ correspond to "low-risk" and "high-risk" states, as mentioned above. A key decision was to use a left-to-right model where, once a patient reaches a "high-risk" state, they remain there. All remaining transition probabilities were initialized uniformly. Applied to a test example, we compute the posterior probabilities $p(x_t|y_1, ..., y_n)$ for $t = 1...n$ using the forward-backward algorithm. Because of the left-to-right assumption, if enough high-risk observations are made they will trigger a transition to the high-risk state. Figure 3 shows two examples of risk processes and their associated posterior state probabilities $p(x_t = s_2|y_1, ..., y_n)$ for $t = 1...n$.
[Figure 3: two panels; in each, the top plot shows a risk process y over time (days) and the bottom plot shows the posterior probability $p(x = s_2|y_1, ..., y_n)$ for each day. (a) Patient is discharged on day 40. (b) Patient tests positive for C. diff on day 24.]
Figure 3: Given all of the observations from $y_1, ..., y_n$ (in blue) we compute the posterior probability of being in a high-risk state for each day (in red).
We classify each patient according to the probability of being in a high-risk state on the most recent day, i.e., $p(x_n = s_2|y_1, ..., y_n)$.
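A minimal NumPy sketch of this computation for the two-state left-to-right model (the emission means follow the initialization in the text; the transition probability p_stay_low is a placeholder for the value learned in training, and the assumption that patients start in the low-risk state is ours):

```python
import numpy as np
from scipy.stats import norm

def posterior_high_risk(y, p_stay_low=0.95, mu=(-1.0, 1.0)):
    """p(x_t = s2 | y_1, ..., y_n) for a two-state left-to-right Gaussian HMM:
    transitions s1 -> s2 are allowed, s2 -> s1 are not (once a patient reaches
    the high-risk state, they remain there)."""
    y = np.asarray(y, dtype=float)
    A = np.array([[p_stay_low, 1.0 - p_stay_low],
                  [0.0, 1.0]])                      # left-to-right transitions
    B = np.column_stack([norm.pdf(y, mu[0], 1.0),   # unit-variance Gaussian
                         norm.pdf(y, mu[1], 1.0)])  # emissions for s1, s2
    n = len(y)
    alpha = np.zeros((n, 2))
    beta = np.ones((n, 2))
    alpha[0] = np.array([1.0, 0.0]) * B[0]          # assume patients start low-risk
    alpha[0] /= alpha[0].sum()
    for t in range(1, n):                           # scaled forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        alpha[t] /= alpha[t].sum()
    for t in range(n - 2, -1, -1):                  # scaled backward pass
        beta[t] = A @ (B[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    post /= post.sum(axis=1, keepdims=True)
    return post[:, 1]                               # p(x_t = s2 | y) per day

# The classification score for an admission is the last entry:
# posterior_high_risk(y)[-1].
```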
4 Experiments & Results
This section describes a set of experiments used to compare several methods for predicting a patient's risk of acquiring C. diff during the current hospital admission. We start by describing the
4.1 Experimental Setup
In order to reduce the possibility of confusing the risk of becoming colonized with C. diff with the
existence of a current infection, for patients from the positive class we consider only data collected
up to two days before a positive test result. This reduces the possibility of learning a classifier based
on symptoms or treatment (a problem with some earlier studies).
For patients who never test positive, researchers typically use the discharge day as the index event
[3]. However, this can lead to deceptively good results because patients nearing discharge are typically healthier than patients not nearing discharge. To avoid this problem, we define the index event
for negative examples as either the halfway point of their admission, or 5 days into the admission,
whichever is greater. We consider a minimum of 5 days for a negative patient since 5 days is the
minimum amount of data we have for any positive patient (e.g., a patient who tests positive on day
7).
To handle class imbalance, we randomly subsample the negative class, selecting 10 negative examples for each positive example. When training the SVM we employ asymmetric cost parameters as
in [18]. Additionally, we remove outliers, those patients with admissions longer than 60 days. Next,
we randomly split the data into stratified training and test sets with a 70/30 split. The training set
consisted of 1,251 admissions (127 positive), while the test set was composed of 532 admissions (50
positive). This split was maintained across all experiments. In all of the experiments, the training
data was used for training purposes and validation of parameter selection, and the test set was used
for evaluation purposes. For training and classification, we employed SVMlight [19] and Kevin
Murphy's HMM Toolbox [20].
4.2 Results
Table 2 compares the performance of eight different classifiers applied to the held-out test data.
The first classifier is our baseline approach, described in Section 3.1; it classifies patients based solely on their current state. The second classifier, RP+Average, is an initial improvement on this approach, and classifies patients based on the average value of their risk process. The remaining classifiers are all based on time-series classification methods. RP+SimilarityEuc.5days classifies patients using a non-linear SVM based on the Euclidean distance between the most recent 5 days.
Table 2: Predicting a positive test result two days in advance using different classifiers. Current State represents the traditional approach to risk stratification, and is the only classifier that is not based on patient Risk Processes (RP).

Approach | AUC | 95% CI | F-Score | 95% CI
Current State | 0.69 | 0.61-0.77 | 0.28 | 0.19-0.38
RP+Average | 0.75 | 0.69-0.81 | 0.32 | 0.21-0.41
RP+SimilarityEuc.5days | 0.73 | 0.67-0.80 | 0.27 | 0.18-0.37
RP+HMMlikelihood | 0.74 | 0.68-0.81 | 0.30 | 0.20-0.38
RP+SimilarityEuc.interp. | 0.75 | 0.69-0.82 | 0.31 | 0.22-0.41
RP+SimilarityDTW | 0.76 | 0.69-0.82 | 0.31 | 0.22-0.41
RP+HMMposterior | 0.76 | 0.70-0.82 | 0.30 | 0.21-0.41
RP+Features | 0.79 | 0.73-0.85 | 0.37 | 0.24-0.49
[Figure 4: ROC curve plotting TPR (Sensitivity) against FPR (1 - Specificity) for the Risk Process Features classifier; AUC: 0.7906. Figure 5: bar plot of SVM feature weights, roughly in the range -0.2 to 0.2, for Features 1-17.]
Figure 4: Results of predicting a patient's risk of testing positive for C. diff in the held-out test set using RP+Features.
Figure 5: Feature weights from SVMs learned using different folds of the training set. The definition of the features is given in Table 1.
RP+SimilarityEuc.interp. uses the entire risk process by interpolating between points. These two methods, in addition to DTW, are described in Section 3.2.2. The difference between RP+HMMlikelihood and RP+HMMposterior is described in Section 3.2.3. RP+Features classifies patients based on a linear combination of the average and other summary statistics (described in Section 3.2.1) of the risk process. For all of the performance measures we compute 95% pointwise confidence intervals by bootstrapping (sampling with replacement) the held-out test set.
Figure 4 gives the ROC curve for the best method, the RP+Features. The AUC is calculated by
sweeping the decision threshold. The RP+Features performed as well or better than the Current
State and RP+Average approach at every point along the curve, thereby dominating both traditional
approaches.
Compared to the other classifiers the classifier based on the RP+Features dominates on both AUC
and F-Score. This classifier is based on a linear combination of statistics (listed in Table 1) computed
from the patient risk processes. We learned the feature weights using the training data. To get a sense
of the importance of each feature we used repeated sub-sampling validation on the training set. We
randomly subsampled 70% of the training data 100 times and learned 100 different SVMs; this
resulted in 100 different sets of feature weights. The results of this experiment are shown in Figure
5. The most important features are the length of the time series (Feature 1), the fraction of the time
for which the patient is at positive risk (Feature 9), and the maximum risk attained (Feature 14).
The only two features with significantly negative weights are Feature 10 and Feature 13, the overall
fraction of time a patient has a negative risk, and the longest consecutive period of time that a patient
has negative risk.
It is difficult to interpret the performance of a classifier based on these results alone, especially since
the classes are imbalanced. Figure 6 gives the confusion matrix for mean performance of the best
classifier, RP+Features. To further convey the ability of the classifier to risk stratify patients, we split
the test patients into quintiles (as is often done in clinical studies) based on the continuous output
of the classifier. Each quintile contains approximately 106 patients. For each quintile we calculated
the probability of a positive test result, based on those patients who eventually test positive for C.
diff. Figure 7 shows that the probability increases with each quintile. The difference between the 1st
and 5th quintiles is striking; relative to the 1st quintile, patients in the 5th quintile are at more than a
25-fold greater risk.
[Figure 6: confusion matrix on the held-out test set.
             Predicted p    Predicted n
Actual p:    TP: 26         FN: 24
Actual n:    FP: 72         TN: 410]
Figure 6: Confusion matrix. Using the best approach, RP+Features, we achieve a Sensitivity of 50% and a Specificity of 85% on the held-out data.
Figure 7: Test patients with RP+Features predictions in the 5th quintile are more than 25 times
more likely to test positive for C. diff than those
in the 1st quintile.
5 Discussion & Conclusion
To the best of our knowledge, we are the first to consider risk of acquiring an infection as a time
series. We use a two-stage process, first extracting approximate risk processes and then using the
risk process as an input to a classifier. We explore three different approaches to classification:
similarity metrics, feature vectors, and hidden Markov models. The majority of the methods based
on time-series classification performed as well if not better than the previous approach of classifying
patients simply based on the average of their risk process. The differences were not statistically
significant, perhaps because of the small number of positive examples in the held-out set. Still,
we are encouraged by these results, which suggest that posing the risk stratification problem as a
time-series classification task can provide more accurate models.
There is large overlap in the confidence intervals for many of the results reported in Table 2, in part
because of the paucity of positive examples. Still, based on the mean performance, all classifiers that
incorporate patient risk processes outperform the Current State classifier, and the majority of those
classifiers perform as well or better than the RP+Average. Only two classifiers did not perform better
than the latter classifier: RP+SimilarityEuc.5days and RP+HMMlikelihood. RP+SimilarityEuc.5days classifies patients based on a similarity metric using only the most recent 5 days of the patient risk processes. Its relatively poor performance suggests that a patient's risk may depend on the entire risk
process. The reasons for the relatively poor performance of the RP+HMMlikelihood approach are
less clear. Initially, we thought that perhaps two states was insufficient, but experiments with larger
numbers of states led to overfitting on the training data. It may well be that the Markovian assumption is problematic in this context. We plan to investigate other graphical models, e.g., conditional
random fields, going forward.
The F-Scores reported in Table 2 are lower than often seen in the machine-learning literature. However, when predicting outcomes in medicine, the problems are often so hard, the data so noisy, and
the class imbalance so great that one cannot expect to achieve the kind of classification performance
typically reported in the machine-learning literature. For this reason, the medical literature on risk
stratification typically focuses on a combination of the AUC and the kind of odds ratios derivable
from the data in Figure 7. As observed in the introduction, a direct comparison with the AUC
achieved by others is not possible because of differences in the datasets, the inclusion criteria, and
the details of the task. We have yet to thoroughly investigate the clinical ramifications of this work.
However, for the daunting task of risk stratifying patients already at an elevated risk for C. diff, an
AUC of 0.79 and an odds ratio of >25 are quite good.
3,895 | 4,526 | Deep Spatio-Temporal Architectures and Learning
for Protein Structure Prediction
Pietro Di Lena, Ken Nagata, Pierre Baldi
Department of Computer Science, Institute for Genomics and Bioinformatics
University of California, Irvine
{pdilena,knagata,pfbaldi}@[ics.]uci.edu
Abstract
Residue-residue contact prediction is a fundamental problem in protein structure
prediction. However, despite considerable research efforts, contact prediction methods are still largely unreliable. Here we introduce a novel deep machine-learning
architecture which consists of a multidimensional stack of learning modules. For
contact prediction, the idea is implemented as a three-dimensional stack of Neural Networks NN^k_ij, where i and j index the spatial coordinates of the contact
map and k indexes "time". The temporal dimension is introduced to capture the
fact that protein folding is not an instantaneous process, but rather a progressive
refinement. Networks at level k in the stack can be trained in supervised fashion to refine the predictions produced by the previous level, hence addressing the
problem of vanishing gradients, typical of deep architectures. Increased accuracy
and generalization capabilities of this approach are established by rigorous comparison with other classical machine learning approaches for contact prediction.
The deep approach leads to an accuracy for difficult long-range contacts of about
30%, roughly 10% above the state-of-the-art. Many variations in the architectures
and the training algorithms are possible, leaving room for further improvements.
Furthermore, the approach is applicable to other problems with strong underlying
spatial and temporal components.
1 Introduction
Protein structure prediction from amino acidic sequence is one of the grand challenges in Bioinformatics and Computational Biology. To date, the more accurate and reliable computational methods
for protein structure prediction are based on homology modeling [27]. Homology-based methods
use similarity to model the unknown target structure using known template structures. However,
when good templates do not exist in protein structure repositories or when sequence similarity is
very poor?which is often the case?homology modeling is no more effective. This is the realm of ab
initio modeling methods, which attempt to recover three-dimensional protein models more or less
from scratch. Because the structure of proteins is invariant under translations and rotations, it is
useful to consider structural representations that do not depend on cartesian coordinates. One such
representation is the contact map, essentially a sparse binary matrix representing which amino acids
are in contact in the 3D structure. While contact map prediction can be viewed as a sub-problem
in protein structure prediction, it is well known that it is essentially equivalent to protein structure
predictions since 3D structures can be completely recovered from sufficiently large subsets of true
contacts [20, 26, 23]. Furthermore, even small sets of correctly predicted contacts can be useful
for improving ab initio methods [25]. In short, contact map prediction plays a fundamental role
in protein structure prediction and most of the state-of-the art contact predictors use some form of
machine learning. Contact prediction is assessed every two years in the CASP experiments [9, 15].
However, despite considerable efforts, the accuracy of the best predictors at CASP rarely exceeds
20% for long-range contacts, suggesting major room for improvements. Simulations suggest that
this accuracy ought to be increased to about 35% in order to be able to recover good 3D structures.
There are two main issues arising in contact prediction that have not been addressed systematically:
(1) Residue contacts are not randomly distributed in native protein structures, rather they are spatially
correlated. Current contact predictors generally do not take into account these correlations, not
even at the local level, since the contact probability for a residue pair is typically learned/inferred
independently of the contact probabilities in the neighborhood of the pair. (2) Proteins do not assume
a 3D conformation instantaneously, but rather through a dynamic folding process that progressively
refines the structure. In contrast, current machine learning approaches attempt to learn contact map
probabilities in a single step. To address these issues, here we introduce a new machine-learning
deep architecture, designed as a deep stack of neural networks, in such a way that each level in
the stack receives in input and refines the predictions produced at the previous level. Each level
can be trained in a fully supervised fashion on the same set of target contacts/non-contacts, thus
overcoming the gradient vanishing problem, typical of deep architectures. The idea of layering
learning modules, such that the outputs of previous layers are fed as input to the next layers, is
not completely new and it has been applied in different contexts, particularly to computer vision
detection problems [4, 10, 12, 22]. However the techniques developed in visual detection cannot
be directly applied to contact prediction due to the intrinsic difference of such problems: protein
sequences have different lengths, thus it is not possible to process the entire sequence at once in the
network input, as it is done for images. The present work represents, to our knowledge, the first
attempt to introduce spatial correlation in protein contact prediction.
2 Data preparation
2.1 Contact definition and evaluation criteria
We define two residues to be in contact if the Euclidean distance between their Cβ atoms (Cα
for Glycines) is lower than 8 Å. This is the contact definition adopted for the contact prediction
assessment in CASP experiments [15]. The protein map of contact (or contact map) provides a
two-dimensional translation and rotation invariant representation of the protein three-dimensional
structure. The information content of the contact map is not uniform within different regions of the
map. Three distinct classes of contacts can be defined, depending on the linear sequence separation
between the residues: (1) long-range contacts, with separation ≥ 24 residues; (2) medium-range
contacts, with separation between 12 and 23 residues; and (3) short-range contacts, with separation
between 6 and 11 residues. Contacts between residues separated by less than 6 residues are dense
and can be easily predicted from the secondary structure. Conversely, the sparse long-range contacts
are the most informative and also the most difficult to predict. Thus, as in the CASP experiments, we
focus primarily on long-range contact prediction for performance assessment. The contact prediction performance is evaluated using the standard accuracy measure [15]: Acc = TP/(TP+FP), where
TP and FP are the true positive and false positive predicted contacts, respectively. The Acc measure
is computed for the sets of L/5, L/10 and 5 top scored predicted pairs, where L is the length of
the domain sequence. The most widely accepted measure of performance for contact prediction
assessment is Acc for L/5 pairs and sequence separation ≥ 24 [15].
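For concreteness, a minimal sketch (our own, not the authors' code) of the contact definition and the Acc measure on the top-scored L/5 long-range pairs:

import numpy as np

def contact_map(cb_coords, threshold=8.0):
    # Binary contact map from an (L, 3) array of C-beta coordinates
    # (C-alpha for glycines): contact iff Euclidean distance < 8 Angstrom.
    diff = cb_coords[:, None, :] - cb_coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return (dist < threshold).astype(int)

def acc_top_long_range(pred, native, frac=5, min_sep=24):
    # Acc = TP / (TP + FP) over the L/frac top-scored pairs with
    # sequence separation >= min_sep (long-range contacts).
    L = native.shape[0]
    i, j = np.triu_indices(L, k=min_sep)
    top = np.argsort(pred[i, j])[::-1][: L // frac]
    return native[i[top], j[top]].mean()   # fraction of true contacts among picks

# Toy usage: a random-walk backbone and a noisy "prediction".
rng = np.random.default_rng(1)
coords = np.cumsum(rng.normal(scale=2.0, size=(60, 3)), axis=0)
native = contact_map(coords)
pred = native + rng.normal(scale=0.5, size=native.shape)
print("Acc (L/5, sep >= 24):", acc_top_long_range(pred, native))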
2.2 Training and test sets
In order to asses the performance of our method, a training and a test set of protein domains are
derived from the ASTRAL database [6]. We extract from the ASTRAL release 1.73 the precompiled
set of protein domains with less than 20% pairwise sequence identity. We select only the domains
belonging to the main SCOP [17] classes (All-Alpha, All-Beta, Alpha/Beta and Alpha+Beta). We
exclude domains of length less than 50 residues, domains with multiple 3D structures, as well as
non-contiguous domains (including those with missing backbone atoms). We further filter this list
by selecting just one representative domain (the shortest one) per SCOP family. This yields a training
set of 2,191 structures (the list of protein domains can be found as supplementary material of [8]).
For performance assessment purposes, this set is partitioned into 10 disjoint groups of roughly the
same size and average domain lengths, so that no domains from two distinct groups belong to the
same SCOP fold. As a result, the 10 sets do not share any structural or sequence similarity, providing
a high-quality benchmark for ab initio prediction. Model performance is assessed using a standard
10-fold cross-validation procedure. In all our tests, the accuracy results on training/test are averaged
over the 10 cross-validation experiments.
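In spirit, such a fold-disjoint partition can be obtained with grouped cross-validation; below is a sketch assuming one group label per domain, given by its SCOP fold (the paper's actual partition additionally balances group sizes and average domain lengths).

import numpy as np
from sklearn.model_selection import GroupKFold

def fold_disjoint_splits(n_domains, scop_folds, n_splits=10):
    # Yield (train, test) index arrays such that no SCOP fold
    # contributes domains to both sides of any split.
    gkf = GroupKFold(n_splits=n_splits)
    X = np.arange(n_domains).reshape(-1, 1)   # placeholder features
    for train_idx, test_idx in gkf.split(X, groups=scop_folds):
        yield train_idx, test_idx

# Toy usage with made-up fold labels: two domains per fold.
folds = np.repeat(np.arange(20), 2)
splits = list(fold_disjoint_splits(40, folds, n_splits=10))
print(len(splits), len(splits[0][1]))   # 10 splits, 4 test domains each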
2.3 Feature and training example selection
In this work, we do not attempt to determine the best static input features for contact prediction.
Rather, we focus on a minimal set of features commonly used in machine learning-based contact
prediction [11, 2, 21, 7, 24, 5]. Each residue in the protein sequence is described by a feature vector
encoding three sources of information (for a total of 25 values): evolutionary information in the form
of profiles (20 values, one for each amino acid type), predicted secondary structure (3 binary values,
β-sheet or α-helix or coil), and predicted solvent accessibility (2 binary values, buried or exposed).
The profiles are computed using PSI-BLAST [1] with an E-value cutoff equal to 0.001 and up to
ten iterations against the non redundant protein sequence database (NR). The secondary structure is
predicted with SSPRO [18] and the solvent accessibility with ACCPRO [19]. For a pair of residues,
these features are included in the network input by using a 9-residue long sliding window centered
at each residue in the pair. In our Deep NN, these features represent the spatial features (Section 3).
The uneven distribution of positive (residue pairs in contact) and negative (residue pairs not in contact) examples in native protein structures requires some rebalancing of the training data. For each
training structure we randomly select 20% of the negative examples, while keeping all the positive
examples. We do not include in our set of selected examples residue pairs with sequence separation
less than 6. All the methods compared in Section 4 are trained on exactly the same sets of examples.
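As a sketch of how the spatial input for a residue pair might be assembled under this encoding (the zero-padding at sequence ends and the array layout are our assumptions):

import numpy as np

WIN = 9      # sliding-window length
NFEAT = 25   # 20 profile + 3 secondary structure + 2 accessibility values

def pair_input(features, i, j):
    # Spatial input for residue pair (i, j): the 25-dim feature rows in
    # a 9-residue window around i and around j, zero-padded at the ends.
    # Output length: 25 * 9 * 2 = 450.
    L = features.shape[0]
    half = WIN // 2
    def window(c):
        w = np.zeros((WIN, NFEAT))
        lo, hi = max(0, c - half), min(L, c + half + 1)
        w[lo - (c - half): lo - (c - half) + (hi - lo)] = features[lo:hi]
        return w
    return np.concatenate([window(i).ravel(), window(j).ravel()])

feats = np.random.rand(80, NFEAT)        # toy 80-residue "protein"
print(pair_input(feats, 10, 55).shape)   # (450,)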
3 Deep Spatio-Temporal Neural Network (DST-NN) architecture
In the specific implementation used in the simulations, the DST-NN architecture consists of a three-dimensional stack of neural networks NN^k_ij, where i and j are the usual spatial coordinates of the
contact map, and k is a "temporal" index. All the neural networks in the stack have the same topology (same input, hidden, and output layer sizes) with a single hidden layer, and a single sigmoidal
output unit estimating the probability of contact between i and j at the level k (Figure 1(a) and 1(b)).
Furthermore, in this implementation, all the networks in the level k have the same weights (weight
sharing). Each level k can be trained in a fully supervised fashion, using the same contact maps
as targets. In this way, each level of the deep architecture represents a distinct contact predictor.
The inputs into NN^k_ij can be separated into purely spatial inputs, and temporal inputs (which are not
purely temporal but include also a spatial component). For fixed i and j, the purely spatial inputs are
identical for all levels k in the stack, hence they do not depend on "time". These purely spatial inputs
include evolutionary profiles, predicted secondary structure, and solvent accessibility in a window
around residue i and residue j. These are the standard inputs used by most other predictors which
attempt to predict contacts in one shot and are described in more detail in Section 2.3. The temporal
inputs, on the other hand, are novel.
3.1 Temporal Features
The temporal inputs for NN^k_ij correspond to the outputs of the networks NN^{k-1}_rs at the previous level in the stack, where r and s range over a neighborhood of i and j. Here we use a neighborhood
of radius 4 centered at (i, j). The temporal features capture the idea that residue contacts are not
randomly distributed in native protein structures, rather they are spatially correlated: a contacting
residue pair is very likely to be in the proximity of a different pair of contacting residues. For
instance, a comparison of the contact proximity distribution (data not shown) for long-range residue
pairs in contact and not in contact shows that over 98% of the contacting residue pairs are in the
proximity of at least one additional contact, compared to 30% for non-contacting residue pairs,
within a neighborhood of radius 4. Although the contact predictions at a given level of the stack are
inaccurate, the contact probabilities included in the temporal feature vector can still provide some
rough estimation of the contact distribution in a given neighborhood.
Thus, in short, while our model is not necessarily meant to simulate the physical folding process,
the stack is used to organize the prediction in such a way that each level in the stack is meant to
refine the predictions produced by the previous levels, integrating information over both space and
[Figure 1 input legend: spatial features: residue features (25×9×2), coarse features (3×7×7), alignment features (4×7×7); temporal features: receptive field (81).]
time. In particular, through the temporal inputs the architecture ought to be able to capture spatial
correlations between contacts, at least over some range.
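Concretely, the 81 temporal inputs for a pair (i, j) can be gathered from the previous level's predicted contact map as in the sketch below (the row-major flattening of the neighborhood is our assumption):

import numpy as np

RADIUS = 4   # a 9 x 9 neighborhood, hence 81 temporal inputs

def temporal_input(prev_map, i, j):
    # Contact probabilities predicted at level k-1 in a (2r+1)^2
    # neighborhood of (i, j), zero-padded outside the map.
    L = prev_map.shape[0]
    out = np.zeros((2 * RADIUS + 1, 2 * RADIUS + 1))
    for di in range(-RADIUS, RADIUS + 1):
        for dj in range(-RADIUS, RADIUS + 1):
            r, c = i + di, j + dj
            if 0 <= r < L and 0 <= c < L:
                out[di + RADIUS, dj + RADIUS] = prev_map[r, c]
    return out.ravel()   # length 81

prev = np.zeros((70, 70))   # level 1 sees an all-zero temporal vector
print(temporal_input(prev, 30, 50).shape)   # (81,)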
[Figure 1 diagram: (a) the DST-NN stack: each level NN^k_ij takes the spatial features for (i, j) plus the output of NN^{k-1}_ij (all zeros at level 1) and outputs a contact probability; (b) the temporal inputs for NN^k_ij, drawn from the contact map predicted by the networks at the previous level.]
Figure 1: DST-NN architecture. (a) Overview. Each NN^k_ij represents a feed-forward neural network trainable by back-propagation. (b) For a pair of residues (i, j), the temporal inputs into NN^{k+1}_ij consist of the contact probabilities produced by the network at the previous level over a neighborhood of (i, j).
3.2 Deep Learning
Training deep multi-layered neural networks is generally hard, since the error gradient tends to
vanish or explode with a high number of layers [16]. In contrast, in the proposed model, the learning
capabilities are not directly degraded by the depth of the stack, since each level of the stack can be
trained in a supervised fashion using true contact maps to provide the targets. In this way, training
can be performed incrementally, by adding a new layer to the stack. More precisely, the weights
of the first level network, NN^1_ij, are randomly initialized and the temporal feature vector is set to
0. The first network NN^1_ij is then trained for one epoch on the given set of examples. The weights
of NN^1_ij are then used to initialize the weights of NN^2_ij, and the predictions obtained with NN^1_ij are
used to set up the temporal feature vector of NN^2_ij. The network NN^2_ij is then trained for one epoch
on the same set of examples used for NN^1_ij, and this procedure is repeated up to a certain depth.
We have experimented with several variations of this training procedure, such as randomization of
the weights for each new network in the stack, training each network in the stack for more than one
epoch, growing the stack up to a maximum number of training epochs (one network for each epoch),
or growing it to a smaller depth but then repeating the training procedure through one or more
epochs. In Section 4.2 we discuss and compare such different training strategies. In Section 5 we
discuss some possible variants and generalizations of the full architecture. In any case, this approach
enables training very deep networks (e.g. with maximal values of k up to 100, corresponding to a
global neural network architecture with 300 layers).
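The incremental procedure can be summarized by the following training-loop sketch; the helper callables (make_net, train_one_epoch, copy_weights, predict_map) are hypothetical stand-ins for the back-propagation machinery, not functions from the paper.

import numpy as np

def train_dst_nn(proteins, depth, make_net, train_one_epoch,
                 copy_weights, predict_map):
    # proteins: list of (features, native_contact_map) pairs.
    # Each level is trained for one epoch, in supervised fashion, on the
    # same true contact maps; its weights seed the next level, and its
    # predictions become the next level's temporal inputs.
    stack = []
    temporal = [np.zeros_like(cmap, dtype=float) for _, cmap in proteins]
    net = make_net()   # level 1: random weights, all-zero temporal inputs
    for k in range(depth):
        train_one_epoch(net, proteins, temporal)
        stack.append(net)
        temporal = [predict_map(net, feats, t)
                    for (feats, _), t in zip(proteins, temporal)]
        nxt = make_net()
        copy_weights(src=net, dst=nxt)   # initialize level k+1 from level k
        net = nxt
    return stack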
4 Results
4.1 Performance comparison
Here we investigate the learning and generalization capabilities of the DST-NN model, and compare
it with plain three-layer Neural Network (NN) models, as well as 2D Recurrent Neural Network
(RNN) models, which are two of the most widely used machine learning approaches for contact
prediction [11, 2, 21, 24]. Here, the NN model is perfectly equivalent to the NNs implemented
in the DST-NN architecture, except for the temporal feature vector (which is missing in the NN
implementation). All three methods are trained with a standard on-line back-propagation procedure
using exactly the same set of examples and the same input features (Section 2.3).
One of the most typical problem in neural network design is related to the issue of choosing, for
a given classification problem, the most appropriate network size (i.e. typically the hidden layer
size, which affects the total number of connections in the network). The learning time and the
4
generalization capabilities of the particular neural network model are highly affected by the network
size parameter. In order to take into account the intrinsic incomparable capabilities of the different
DST-NN, NN, and RNN architectures, we perform our tests by considering a range of exponentially
increasing hidden layer sizes (4,8,16,32,64, and 128 units) for each architecture. The total number
of connection weights for each architecture in function of the hidden layer size, as well as the time
needed to perform one training epoch, are shown in Table 1.
Figure 2 shows the learning curves of the three methods as a function of the training epoch and
the different hidden layer sizes. We show the cross-training average accuracy on both training sets
(continuous line) and test sets (dotted line). The learning curves in Figure 2 show the generalization
performance with respect to the contact prediction accuracy on L/5 long range contacts; the accuracy
of prediction on long range contacts is the most widely accepted evaluation measure for contact
prediction and it provides a better estimate of the prediction performance than the training/testing
error. Since very large training epochs are infeasible in terms of time for the RNN model (see Table
1), for the purpose of comparison, we trained each method for a maximum of 100 epochs. In Table 2
we summarize the prediction performance of the three machine learning methods by showing the
maximum average accuracy achieved in testing over 100 training epochs.
From Figure 2, the DST-NN has overall higher storage and generalization capacity than NN and
RNN. In particular, for hidden layer sizes larger than or equal to 8, the DST-NN performance are
superior to those of NN and RNN, regardless of their sizes. Moreover, note that hidden layer sizes
larger than 32 do not increase the generalization capabilities of any one of the three methods (Table 2). The counterintuitive learning curves of the RNN for hidden layer sizes larger than 8 can
be explained by considering the structure of the RNN architecture. The RNN model exploits a
recursive architecture that suffers, as general deep architectures, from the problem of gradient vanishing/explosion. In order to overcome this problem the authors of [2] use a modified form of
gradient descent, by which the delta-error for back-propagation is mapped into a piecewise linear
interval; this prevents the delta-error from becoming too small or too large. The boundaries of the
interval have been tuned for very small hidden layers (private communication). In our experiments,
we use the same boundaries for all the tested hidden layer sizes and, apparently, these proved to
be ineffective for hidden layer sizes larger than or equal to 16. In comparison, we remark again
that the DST-NN is unaffected by the gradient vanishing problem, even for very deep stacks. From
Figure 2, we notice that the DST-NN tends to overfit the training data more easily than the NN. For
instance, we notice some small overfitting for the DST-NN starting with hidden layer size 32, while
the NN starts to show some small overfitting only at hidden layer size 128. On the contrary, the
RNN does not show any sign of overfitting in 100 epochs of training, regardless of the hidden layer
size in the tested range, and the performance in training is somewhat equivalent to the performance
in testing. As a final consideration, from Table 2, the NN and RNN best performance on L/5 long
range contacts reflect quite well the state-of-the-art in contact prediction [9, 15] with an accuracy
in the 21-23% range. In contrast, the DST-NN architecture achieves a maximum accuracy of 29%,
which represents a significant improvement over the state-of-the-art. As a visual example, Figure 3
shows the best predictions obtained by each method on a target domain in our data set. Although the
three methods achieve exactly the same accuracy (0.6) on the top-scored L/5 long-range contacts, it
Table 1: Connection weights and training times

HL size   DST-NN #Conn   Time      NN #Conn   Time     RNN #Conn   Time
4         2,133          ~6m       1,809      ~1m      17,169      ~1h30m
8         4,265          ~10m      3,617      ~3m      19,105      ~2h
16        8,529          ~15m      7,233      ~5m      22,977      ~2h40m
32        17,057         ~26m      14,465     ~8m      30,721      ~3h20m
64        34,113         ~1h20m    28,929     ~15m     46,209      ~4h50m
128       68,225         ~2h       57,857     ~28m     77,185      ~7h
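The #Conn entries for the NN and DST-NN are consistent with single-hidden-layer networks over the inputs of Sections 2.3 and 3.1: assuming 450 spatial inputs (25 x 9 x 2), 81 extra temporal inputs for the DST-NN, and one bias per hidden and output unit, #Conn = (n_in + 1) * HL + (HL + 1). A quick check (our own arithmetic, not from the paper):

for hl in (4, 8, 16, 32, 64, 128):
    nn_conn = (450 + 1) * hl + (hl + 1)     # n_in = 450 for the NN
    dst_conn = (531 + 1) * hl + (hl + 1)    # n_in = 450 + 81 for the DST-NN
    print(hl, nn_conn, dst_conn)            # reproduces the table's #Conn columns

The RNN's recursive topology follows a different count.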
[Figure 2 panels (a)-(f): accuracy vs. training epoch (1-100) for DST-NN, NN, and RNN, on training (continuous) and test (dotted) sets, for hidden layer sizes 4, 8, 16, 32, 64, and 128.]
Figure 2: Learning curves of different machine learning methods
Table 2: Best prediction performance

          DST-NN                NN                    RNN
HL size   L/5   L/10   Best5    L/5   L/10   Best5    L/5   L/10   Best5
4         0.21  0.23   0.26     0.21  0.24   0.27     0.21  0.23   0.25
8         0.25  0.27   0.29     0.21  0.24   0.27     0.23  0.26   0.29
16        0.27  0.30   0.33     0.23  0.26   0.28     0.22  0.25   0.29
32        0.29  0.32   0.35     0.23  0.26   0.29     0.23  0.26   0.29
64        0.29  0.33   0.37     0.23  0.25   0.28     0.22  0.25   0.28
128       0.29  0.33   0.36     0.23  0.25   0.28     0.22  0.25   0.28
4.2 Training strategies comparison
Here we compare the generalization performance of the DST-NN under different training strategies.
Since the training time for the DST-NN increases substantially with the size of the hidden layers, in
these tests we consider only hidden layers of size 16 and 32. On the other end, as shown in Table 2, a
hidden layer of size 32 does not limit the generalization performance of our method in comparison to
larger sizes. As in the previous section, we show the performance of the different training strategies
in terms of learning curves (Figure 4) and maximum achievable accuracy in testing (Table 3).
Recall that, according to our general training strategy, when a new network is added to the stack its
initial connection weights are copied from the previous-level network in the stack. Moreover, each
network is trained on exactly the same set of examples. Thus, a natural question is to which extent
the randomization, in terms of both connection weights and training examples, affects the network
learning capabilities. As shown in Figure 4(a)-(b), under weight randomization (DST-NN1), the DST-NN gets stuck in local minima and the best prediction performance is comparable to that of NN
[Figure 3 panels: predicted contact maps for (a) DST-NN, (b) NN, and (c) RNN on the d1igqa domain; axes indexed 1-54.]
Figure 3: Predicted contacts at sequence separation ≥ 6 for the d1igqa domain. In all three figures, the lower triangle shows the native contacts (black dots). The blue and red dots in the upper
triangle represent the correctly (blue) and incorrectly (red) predicted contacts among the N top-scored residue pairs, where N is the number of native contacts at sequence separation ≥ 6. All three
methods achieve 0.6 accuracy on the top L/5 long-range contacts.
[Figure 4 panels (a)-(b): accuracy vs. training epoch (1-100) for the DST-NN, DST-NN1, DST-NN2, and DST-NN3 training strategies, on training and test sets, for hidden layer sizes 16 and 32.]
Figure 4: Learning curves of different training strategies
and RNN (Table 2 and Table 3). On the other hand, under weight randomization, the DST-NN does
not show any sign of overfitting and the training performance is similar to the testing performance,
as for the RNN in the previous section. Conversely, randomized selection of the training examples
(DST-NN2 ) does not affect the performance of the DST-NN. However, this training strategy seems to
be slightly less stable than our general strategy, since the standard deviation of the accuracy over the
ten training/testing sets is slightly higher (data not shown). In these tests, according to our general
training strategy, each network in the stack has been trained for one single epoch. The approach of
training each network for more than one single epoch leads to slightly better accuracy (< 1% of
improvement) at the cost of a larger training time (data not shown).
Another natural issue concerning DST-NNs is whether the depth of the stack affects the generalization capabilities of the model. To assess this issue, we train a new DST-NN by limiting the depth of
the stack to a fixed number of networks and then repeating the training procedure up to 100 epochs
(DST-NN3 ). For this test, we use a limit size of 20 networks, which roughly corresponds to the
interval with highest learning peaks for hidden layer size 16 (see Figure 2). Due to the increased
training time for this model (20 times slower), testing different stack depths is not practical. For this
training strategy, the randomization of the weights for each newly added network in the stack does
not produce any dramatic loss in prediction accuracy, although the performance results are slightly
lower than those obtained by using our general weight initialization strategy (data not shown). As
shown in Figure 4 and Table 3, although more time consuming, this training technique allows an improvement of approximately 2 percentage points of accuracy with respect to our general training approach
(at least for a hidden layer of size 16). For this reason, restarting the training on a fixed size stack is
more advantageous in terms of prediction performance than having a very deep stack. Unfortunately,
the optimal stack depth is very likely related to the specific classification problem and it cannot be
inferred a priori from the architecture topology.
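One plausible reading of the DST-NN3 variant, in the style of the sketch at the end of Section 3.2: the stack is capped at a fixed depth (here 20) and swept bottom-to-top once per training pass, with weights carried over between sweeps. The helper callables are again hypothetical.

def train_dst_nn3(proteins, nets, passes, train_one_epoch,
                  predict_map, zero_maps):
    # nets: a fixed-depth list of networks (e.g. 20), reused across passes.
    for _ in range(passes):
        temporal = zero_maps(proteins)   # level 1 sees zeros on every sweep
        for net in nets:
            train_one_epoch(net, proteins, temporal)
            temporal = [predict_map(net, feats, t)
                        for (feats, _), t in zip(proteins, temporal)]
    return nets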
Table 3: Best prediction performance

                  HL 16                  HL 32
Method      L/5   L/10   Best5     L/5   L/10   Best5
DST-NN      0.27  0.30   0.33      0.29  0.32   0.35
DST-NN1     0.24  0.27   0.30      0.24  0.27   0.29
DST-NN2     0.27  0.30   0.33      0.29  0.33   0.36
DST-NN3     0.29  0.32   0.35      0.30  0.33   0.37
5 Concluding remarks
We have presented a novel and general deep machine-learning architecture for contact prediction,
implemented as a stack of Neural Networks NN^k_ij with two spatial dimensions and one temporal
dimension. The stack architecture is used to organize the prediction in such a way that each level
in the stack can receive as input, through the temporal feature vectors, and refine the predictions
produced by the previous stages in the stack. This approach is closer to the characteristics of the
folding process, where the folded state is dynamically attained through a series of local refinements.
While our architecture is not meant to simulate the folding process, the idea to model the contact
prediction in a multi-level fashion seems more natural than the traditional single-shot approach. This
is confirmed by the improved generalization capabilities and accuracy of the DST-NN model, which
have been demonstrated by rigorous comparison against other approaches.
The proposed architecture is somewhat general and it can be adopted as a starting point for more
sophisticate methods for contact prediction or other problems. For instance, while the elementary
learning modules of the architecture are implemented using neural networks, it is clear that these
could be replaced by other models, such as SVMs. Moreover, here we considered a simple square
neighborhood for encoding the contact predictions in the temporal feature vector; more complex
relationships could be discovered by exploiting different topologies for such a feature vector. For
example, different secondary structure elements tend to form specific contacting patterns and such
patterns could be directly implemented in one or more specific feature vectors (see, for example, [8]).
Another property of our DST-NN approach is that each level can be trained in supervised fashion.
While we have used the true contact map as the target for all the levels in the architecture, it is clear
that different targets could be used at different levels [3]. For instance, experimental or simulation
data1 on protein folding could be used to generate contact maps at different stages of folding and
use those as targets. Different variations based on these ideas are currently under investigation.
The DST-NN approach is in fact a special case of the DAG-RNN approach described in [2] and
relies on an underlying directed acyclic graph (DAG) to organize the computations. For these reasons, one could also imagine architectures based on a higher-dimensional stack of learning modules,
for instance a stack of the form NN^{lm}_ijk, where the spatial coordinates are three-dimensional, and the
"temporal" coordinates are two-dimensional with a connectivity that ensures the absence of directed
cycles (the temporal connections running only from the "past" towards the "future"). DST-NNs of
the form NN^k_i, with one spatial and one temporal coordinate, could be applied to sequence problems, for instance to the prediction of secondary structure or relative solvent accessibility. Likewise,
DST-NNs of the form NN^l_ijk, with three spatial and one temporal coordinate, could be applied, for
instance, to problems in weather forecasting [13] or trajectory prediction in robot movements [14].
References
[1] Altschul,S.F., Madden,T.L., Schäffer,A.A., Zhang,J., Zhang,Z., Miller,W., Lipman, D.J. (1997) Gapped
BLAST and PSI-BLAST: a new generation of protein database search programs, Nucleic Acids Res., 25(17),
3389-3402.
[2] Baldi,P., Pollastri,G. (2003) The Principled Design of Large-Scale Recursive Neural Network
Architectures-DAG-RNNs and the Protein Structure Prediction Problem, Journal of Machine Learning Research, 4, 575-602.
Footnote 1: http://www.dynameomics.org
[3] Baldi,P. (2012) Boolean Autoencoders and Hypercube Clustering Complexity, Designs, Codes, and Cryptography, 65, 383-403.
[4] Bengio,Y., Lamblin,P., Popovici,D., Larochelle,H. (2006) Greedy Layer-Wise Training of Deep Networks.
Proceedings of the 20th Annual Conference on Neural Information Processing Systems (NIPS 2006), 153160.
[5] Björkholm,P., Daniluk,P., Kryshtafovych,A., Fidelis,K., Andersson,R., Hvidsten,T.R. (2009) Using multidata hidden Markov models trained on local neighborhoods of protein structure to predict residue-residue
contacts. Bioinformatics, 25, 1264-1270.
[6] Chandonia,J.M., Hon,G., Walker,N.S., Lo Conte,L., Koehl,P., Levitt, M., Brenner, S.E. (2004) The ASTRAL Compendium in 2004, Nucl. Acids Res. , 32(suppl 1), D189-D192.
[7] Cheng,J., Baldi,P. (2007) Improved residue contact prediction using support vector machines and a large
feature set, BMC Bioinformatics, 8, 113.
[8] Di Lena,P., Nagata,K., Baldi,P. (2012) Deep Architectures for Protein Contact Map Prediction, Bioinformatics, 28, 2449-2457.
[9] Ezkurdia,I., Graña,O., Izarzugaza,J.M., Tress,M.L. (2009) Assessment of domain boundary predictions
and the prediction of intramolecular contacts in CASP8, Proteins, 77(suppl 9), 196-209
[10] Farabet,C. Couprie,C., Najman,L., LeCun,Y. (2012) Scene Parsing with Multiscale Feature Learning,
Purity Trees, and Optimal Covers. Proceedings of the 29th International Conference on Machine Learning
(ICML 2012).
[11] Fariselli,P.,Olmea,O.,Valencia,A.,Casadio,R. (2001) Progress in predicting inter-residue contacts of proteins with neural networks and correlated mutations. Proteins 5, 157-162.
[12] Heitz,G., Gould,S., Saxena,A., Koller,D. (2008) Cascaded Classification Models: Combining Models for
Holistic Scene Understanding. Proceedings of the 22nd Annual Conference on Neural Information Processing Systems (NIPS 2008), 641-648.
[13] Hsieh,W. (2009) Machine Learning Methods in the Environmental Sciences: Neural Networks and Kernels. Cambridge University Press, NY, USA.
[14] Jetchev,N., Toussaint,M. (2009) Trajectory prediction: learning to map situations to robot trajectories.
Proceedings of the 26th Annual International Conference on Machine Learning, 449-456.
[15] Kryshtafovych,A., Fidelis,K., Moult,J. (2011) CASP9 results compared to those of previous CASP experiments, Proteins, In press.
[16] Larochelle,H., Bengio,J., Louradour,J., Lamblin,P. (2009) Exploring Strategies for Training Deep Neural
Networks Journal of Machine Learning Research, 10, 1-40.
[17] Murzin,A.G., Brenner,S.E., Hubbard,T., Chothia,C. (1995) SCOP: a structural classification of proteins
database for the investigation of sequences and structures, J. Mol. Biol., 247(4), 536-540.
[18] Pollastri,G., Przybylski,D., Rost,B., Baldi,P. (2002) Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and profiles, Proteins, 47(2), 228-235.
[19] Pollastri,G., Baldi,P., Fariselli,P., Casadio,R. (2002) Prediction of Coordination Number and Relative
Solvent Accessibility in Proteins, Proteins, 47(2), 142-153.
[20] Porto,M., Bastolla,U., Roman,H.E., Vendruscolo,M. (2004) Reconstruction of protein structures from a
vectorial representation, Phys. Rev. Lett., 92, 218101.
[21] Punta,M., Rost,B. (2005) PROFcon: novel prediction of long-range contacts, Bioinformatics, 21, 2960-2968.
[22] Ross,S., Munoz,D., Hebert,M., Bagnell,J.A. (2011) Learning message-passing inference machines for
structured prediction, Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, 2737-2744.
[23] Sathyapriya,R., Duarte,J.M., Stehr,H., Filippis,I., Lappe,M. (2009) Defining an Essence of Structure Determining Residue Contacts in Proteins. PLoS Comput Biol, 5(12), e1000584.
[24] Shackelford,G., Karplus,K. (2007) Contact prediction using mutual information and neural nets. Proteins, 69, 159-164.
[25] Tress,M.L., Valencia,A. (2010) Predicted residue-residue contacts can help the scoring of 3D models.
Proteins, 78(8), 1980-1991.
[26] Vassura,M., Margara,L., Di Lena,P., Medri,F., Fariselli,P. , Casadio,R. (2008) FT-COMAR: fault tolerant
three-dimensional structure reconstruction from protein contact maps. Bioinformatics, 24, 1313-1315.
[27] Zhang,Y. (2008) Progress and challenges in protein structure prediction. Curr Opin Struct Biol., 18(3),
342-348.
3,896 | 4,527 | Synchronization can Control Regularization in
Neural Systems via Correlated Noise Processes
Jake Bouvrie
Department of Mathematics
Duke University
Durham, NC 27708
[email protected]
Jean-Jacques Slotine
Nonlinear Systems Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02138
[email protected]
Abstract
To learn reliable rules that can generalize to novel situations, the brain must be capable of imposing some form of regularization. Here we suggest, through theoretical and computational arguments, that the combination of noise with synchronization provides a plausible mechanism for regularization in the nervous system. The
functional role of regularization is considered in a general context in which coupled computational systems receive inputs corrupted by correlated noise. Noise on
the inputs is shown to impose regularization, and when synchronization upstream
induces time-varying correlations across noise variables, the degree of regularization can be calibrated over time. The resulting qualitative behavior matches
experimental data from visual cortex.
1 Introduction
The problem of learning from examples is in most circumstances ill-posed. This is particularly true
for biological organisms, where the "examples" are often complex and few in number, and the ability to adapt is a matter of survival. Theoretical work in inverse problems has long established that
regularization restores well-posedness [5, 20] and furthermore, implies stability and generalization
of a learned rule [2]. How the nervous system imposes regularization is not entirely clear, however.
Bayesian theories of learning and decision making [14, 12, 29] hold that the brain is able to represent prior distributions and assign (time-varying) uncertainty to sensory measurements. By way of
a Bayesian integration, the brain may effectively work with hypothesis spaces of limited complexity
when appropriate, trading off prior knowledge against new evidence [9]. But while these mechanisms can effect regularization, it is still not clear how to calibrate it: when to cease adaptation or
how to fix a hypothesis space suited to a given task. A second possible explanation is that regularization ? and a representation of uncertainty ? may emerge naturally due to noise. Intuitively, if noise
is allowed to ?smear? observations presented to a learning apparatus, overfitting may be mitigated ?
a well known phenomenon in artificial neural networks [1].
In this paper we argue that noise provides an appealing, plausible mechanism for regularization in
the nervous system. We consider a general context in which coupled computational circuits subject
to independent noise receive common inputs corrupted by spatially correlated noise. Information
processing pathways in the mammalian visual cortex, for instance, fall under such an organizational
pattern [10, 24, 7]. The computational systems in this setting represent high-level processing stages,
downstream from localized populations of neurons which encode sensory input. Noise correlations
in the latter arise from, for instance, within-population recurrent connections, shared feed-forward
inputs, and common stimulus preferences [24]. Independent noise impacting higher-level computational elements may arise from more intrinsic, ambient neuronal noise sources, and may be roughly
independent due to broader spatial distribution [6].
To help understand the functional role of noise in inducing regularization, we propose a high-level
model that can explain quantitatively how noise translates into regularization, and how regularization
may be calibrated over time. The ability to adjust regularization is key: as an organism accumulates
1
experience, its models of the world should be able to adjust to the complexity of the relationships
and phenomena it encounters, as well as reconcile new information with prior probabilities. Our
point of view is complementary to Bayesian theories of learning; the representation and integration
of sensory uncertainty is closely related to a regularization interpretation of learning in ill-posed
settings. We postulate that regularization may be plausibly controlled by one of the most ubiquitous
mechanisms in the brain: synchronization. A simple, one-dimensional regression (association) problem in the presence of both independent ambient noise and correlated measurement noise suffices to
illustrate the core ideas.
When a learner is presented with a collection of noisy observations, we show that synchronization may be used to adjust the dependence between observational noise variables, and that this
in turn leads to a quantifiable change in the degree of regularization imposed upon the learning
task. Regularization is further shown to both improve the convergence rate towards the solution
to the regression problem, and reduce the negative impact of ambient noise. The model?s qualitative behavior coincides with experimental data from visual tracking tasks [10] (area MT) and from
anesthetized animals [24] (area V1), in which correlated noise impacts sensory measurements and
correlations increase over short time scales. Other experiments involving perceptual learning tasks
have shown that noise correlations decrease with long-term training [8]. The mechanism we propose
suggests that changes in noise correlations arising from feedback synchronization can calibrate regularization, possibly leading to improved convergence properties or better solutions. Collectively,
the experimental evidence lends credence to the hypothesis that, at a high level, the brain may be optimizing its learning processes by adapting dependence among noise variables, with regularization
an underlying computational theme.
Lastly, we consider how continuous dynamics solving a given learning problem might be efficiently
computed in cortex. In addition to supporting regularization, noise can be harnessed to facilitate
distributed computation of the gradients needed to implement a dynamic optimization process. Following from this observation, we analyze a stochastic finite difference scheme approximating derivatives of quadratic objectives. Difference signals and approximately independent perturbations are
the only required computational components. This distributed approach to the implementation of
dynamic learning processes further highlights a connection between parallel stochastic gradient descent algorithms [25, 15, 28], and neural computation.
2 Learning as noisy gradient descent on a network
The learning process we will consider is that of a one-dimensional linear fitting problem described by a dynamic gradient based minimization of a square loss objective, in the spirit of Rao & Ballard [21]. This is perhaps the simplest and most fundamental abstract learning problem that an organism might be confronted with: that of using experiential evidence to infer correlations and ultimately discover causal relationships which govern the environment and which can be used to make predictions about the future. The model realizing this learning process is also simple, in that we capture neural communication as an abstract process "in which a neural element (a single neuron or a population of neurons) conveys certain aspects of its functional state to another neural element" [22]. In doing so, we focus on the underlying computations taking place in the nervous system rather than particular neural representations. The analysis that follows, however, may be extended more generally to multi-layer feedback hierarchies.
To make the setting more concrete, assume that we have observed a set of input-output examples $\{x_i \in \mathbb{R}, y_i \in \mathbb{R}\}_{i=1}^m$, with each $x_i$ representing a generic unit of sensory experience, and want to estimate the linear regression function $f_w(x) = wx$ (we assume the intercept is 0 for simplicity). Adopting the square loss, the total prediction error incurred on the observations by the rule $f_w$ is given by

$$E(w) = \tfrac{1}{2}\sum_{i=1}^{m} (y_i - f_w(x_i))^2 = \tfrac{1}{2}\sum_{i=1}^{m} (y_i - w x_i)^2. \qquad (1)$$
Note that there is no explicit regularization penalty here. We will model adaptation (training) by a noisy gradient descent process on this squared prediction error loss function. The gradient of $E$ with respect to the slope parameter is given by $\nabla_w E = -\sum_{i=1}^{m}(y_i - w x_i)x_i$, and generates the continuous-time, noise-free gradient dynamics

$$\dot{w} = -\nabla_w E(w). \qquad (2)$$
The learning dynamics we will consider, however, are assumed to be corrupted by two distinct kinds
of noise:
(N1) Sensory observations (xi )i are corrupted by time-varying, correlated noise processes.
(N2) The dynamics are themselves corrupted by additive ?ambient? noise.
To accommodate (N1) we will borrow an averaging, or homogenization, technique for multi-scale
systems of stochastic differential equations (SDEs) that will drastically simplify analysis. We have
discussed the origins of (N1) above. The noise (N2) may be significant (we do not take small noise
limits) and can be attributed to some or all of: error in computing and sensing a gradient, intrinsic
neuronal noise [6] (aggregated or localized), or interference between large assemblies of neurons or
circuits.
Synchronization among circuits and/or populations will be modeled by considering multiple coupled
dynamical systems, each receiving the same noisy observations. Such networks of systems capture
common pooling or averaging computations, and provides a means for studying variance reduction.
The collective enhancement of precision hypothesis suggests that the nervous system copes with
noise by averaging over collections of signals in order to reduce variation in behavior and improve
computational accuracy [23, 13, 26, 3]. Coupling synchronizes the collection of dynamical systems
so that each tends to a common "consensus" trajectory having reduced variance. If the coupling is
strong enough, then the variance of the consensus trajectory decreases as O(1/n) after transients,
if there are n signals or circuits [23, 17, 19, 3]. We will consider regularization in the context of
networks of coupled SDEs, and investigate the impact of coupling, redundancy (n) and regularization upon the convergence behavior of the system. Considering networks will allow a more general
analysis of the interplay between different mechanisms for coping with noise, however n can be
small or 1 in some situations.
Formally, the noise-free flow (2) can be modified to include noise sources (N1) and (N2) as follows. Noise (N1) may be modeled as a white-noise limit of Ornstein-Uhlenbeck (OU) processes $(Z_t)_i$, and (N2) as an additive diffusive noise term. In differential form, we have

$$dw_t = -\big( w_t \|x + Z_t\|^2 - \langle x + Z_t,\, y\rangle \big)\, dt + \sigma\, dB_t \qquad (3a)$$
$$dZ_t^i = -\frac{Z_t^i}{\varepsilon}\, dt + \frac{\sqrt{2}\,\sigma_z}{\sqrt{\varepsilon}}\, dB_t^i, \qquad i = 1, \ldots, m. \qquad (3b)$$

Here, $B_t$ denotes the standard 1-dimensional Brownian motion and captures noise source (N2). The observations $(x)_i = x_i$ are corrupted by the noise processes $(Z_t)_i = Z_t^i$, following (N1). For the moment, the $Z_t^i$ are independent, but we will relax this assumption later. The parameter $0 < \varepsilon \ll 1$ controls the correlation time of a given noise process. In the limit as $\varepsilon \to 0$, $Z_t^i$ may be viewed as a family of independent zero-mean Gaussian random variables indexed by $t$. Characterizing the noise $Z_t$ as (3b) with $\varepsilon \to 0$ serves as both a modeling approximation/idealization and an analytical tool.
2.1 Homogenization

The system (3a)-(3b) above is a classic "fast-slow" system: the gradient descent trajectory $w_t$ evolves on a timescale much longer than the $O(\varepsilon)$ stochastic perturbations $Z_t$. Homogenization considers the dynamics of $w_t$ after averaging out the effect of the fast variable $Z_t$. In the limit as $\varepsilon \to 0$ in (3b), the solution to the averaged SDE converges (in a sense to be discussed below) to the solution of the original SDE (3a).
The following Theorem is an instance of [18, Thm. 3], adapted to the present setting.
Theorem 2.1. Let $0 < \varepsilon \ll 1$, $\sigma, \zeta > 0$ and let $\mathcal{X}, \mathcal{Y}$ denote finite-dimensional Euclidean spaces. Consider the system

$$dx = f(x, y)\,dt + \sigma\, dW_t, \qquad x(0) = x_0 \qquad (4a)$$
$$dy = \varepsilon^{-1} g(y)\,dt + \varepsilon^{-1/2}\,\zeta\, dB_t, \qquad y(0) = y_0, \qquad (4b)$$

where $x \in \mathcal{X}$, $y \in \mathcal{Y}$, and $W_t \in \mathcal{X}$, $B_t \in \mathcal{Y}$ are independent multivariate Brownian motions. Assume that for all $x \in \mathcal{X}$, $y \in \mathcal{Y}$ the following conditions on (4) hold:

$$\langle g(y),\, y/\|y\|\rangle \le -r\|y\|^{\beta}, \qquad \|f(x, y) - f(x_0, y)\| \le C(y)\|x - x_0\|, \qquad \|f(x, y)\| \le K(1 + \|x\|)(1 + \|y\|^{q}),$$

with $r > 0$, $\beta \ge 0$, $q < \infty$, and where $C(y)$ is a constant depending on $y$. If the SDE (4b) is ergodic, then there exists a unique invariant measure $\mu_\infty$ characterizing the probability distribution of $y_t$ in the steady state, and we may define the vector field $F(x) \triangleq \mathbb{E}_{\mu_\infty}[f(x, y)] = \int_{\mathcal{Y}} f(x, y)\,\mu_\infty(dy)$. Furthermore, $x(t)$ solving (4a) is closely approximated by $X(t)$ solving

$$dX = F(X)\,dt + \sigma\, dW_t, \qquad X(0) = x_0$$

in the sense that, for any $t \in [0, T]$, $x(t) \to X(t)$ in $C([0, T], \mathcal{X})$ as $\varepsilon \to 0$.
It may be readily shown that the system (3) satisfies the conditions of Theorem 2.1. Moreover, the OU process (3b) on $\mathbb{R}^m$ is known to be ergodic with stationary distribution $Z_\infty \sim \mathcal{N}(0, \sigma_z^2 I)$ (see e.g. [11]), where $\mathcal{N}(\mu, \Sigma)$ denotes the multivariate Gaussian distribution with mean $\mu$ and covariance $\Sigma$. Averaging over the fast variable $Z_t$ appearing in (3a) with respect to this distribution gives

$$dw_t = -\big( w_t (\|x\|^2 + m\sigma_z^2) - \langle x, y\rangle \big)\, dt + \sigma\, dB_t, \qquad (5)$$

and by Theorem 2.1, we can conclude that Equation (5) well-approximates (3a) when $\varepsilon \to 0$ in (3b) in the sense of weak convergence of probability measures.
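As a quick numerical check of this averaging step, the following sketch integrates both the fast-slow system (3a)-(3b) and the averaged dynamics (5) by Euler-Maruyama and compares them. It is not code from the paper; the step sizes, parameter values, and variable names are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    m, eps, sigma, sigma_z = 20, 1e-3, 0.5, 1.0    # assumed parameter values
    x = rng.normal(0, 0.2, m)
    y = rng.uniform(0, 20, m)
    dt, T = 1e-4, 2.0

    w_fast, Z = 0.0, np.zeros(m)                   # fast-slow system (3a)-(3b)
    w_avg = 0.0                                    # averaged system (5)
    alpha, beta = x @ x + m * sigma_z**2, x @ y
    for _ in range(int(T / dt)):
        dB = rng.normal(0, np.sqrt(dt))            # shared ambient noise increment
        dBz = rng.normal(0, np.sqrt(dt), m)
        xz = x + Z
        w_fast += -(w_fast * (xz @ xz) - xz @ y) * dt + sigma * dB
        Z += -(Z / eps) * dt + np.sqrt(2.0 / eps) * sigma_z * dBz
        w_avg += -(alpha * w_avg - beta) * dt + sigma * dB

    print(w_fast, w_avg, beta / alpha)             # both settle near the same value

Because eps is small, both trajectories fluctuate around the regularized equilibrium beta / alpha, illustrating the weak-convergence statement above.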
2.2 Network structure

Now consider $n \ge 1$ diffusively coupled neural systems implementing the dynamics (5), with associated parameters $w(t) = (w_1(t), \ldots, w_n(t))$. If $W_{ij} \ge 0$ is the coupling strength between systems $i$ and $j$, $L = \mathrm{diag}(W\mathbf{1}) - W$ is the network Laplacian [16]. We assume here that $L$ is symmetric and defines a connected network graph. Letting $\alpha := \|x\|^2 + m\sigma_z^2$, $\beta := \langle x, y\rangle$ and $\bar\beta := (\beta/\alpha)\mathbf{1}$, the coupled system can be written concisely as

$$dw_t = -(L + \alpha I)w_t\, dt + \beta\mathbf{1}\, dt + \sigma\, dB_t = (L + \alpha I)(\bar\beta - w_t)\, dt + \sigma\, dB_t, \qquad (6)$$

with $B_t$ an $n$-dimensional Brownian motion. The diffusive couplings here should be interpreted as modeling abstract intercommunication between and among different neural circuits, populations, or pathways. In such a general setting, diffusive coupling is a natural and mathematically tractable choice that can capture the key, aggregate aspects of communication among neural systems. Note that one can equivalently consider $n$ systems (3a) and then homogenize assuming $n$ copies of the same noise process $Z_t$, or $n$ independent noise processes $\{Z_t^{(i)}\}_i$; either choice also leads to (6).
3 Learning with noisy data imposes regularization

Equation (6) is seen by inspection to be an OU process, and has solution (see e.g. [11])

$$w(t) = e^{-(L+\alpha I)t} w(0) + \big(I - e^{-(L+\alpha I)t}\big)\bar\beta + \sigma \int_0^t e^{-(L+\alpha I)(t-s)}\, dB_s. \qquad (7)$$

Integrals of Brownian motion are normally distributed, so $w(t)$ is a Gaussian process and can be characterized entirely by its time-dependent mean and covariance, $w(t) \sim \mathcal{N}(\mu_w(t), \Sigma_w(t))$. A straightforward manipulation (details omitted due to lack of space) gives

$$\mu_w(t) := \mathbb{E}[w(t)] = e^{-(L+\alpha I)t}\,\mathbb{E}[w(0)] + \big(I - e^{-(L+\alpha I)t}\big)\bar\beta \qquad (8)$$
$$\Sigma_w(t) := \mathbb{E}\big[(w(t) - \mathbb{E}w(t))(w(t) - \mathbb{E}w(t))^{\top}\big] = e^{-(L+\alpha I)t}\,\mathbb{E}[w(0)w(0)^{\top}]\,e^{-(L+\alpha I)t} + \frac{\sigma^2}{2}(L+\alpha I)^{-1}\big(I - e^{-2(L+\alpha I)t}\big).$$

The solution to the noise-free regression problem (minimizing (1)) is given by $w^* = \langle x, y\rangle/\|x\|^2$, however (7) together with (8) reveals that, for any $i \in \{1, \ldots, n\}$,

$$\mathbb{E}[w_i(t)] \xrightarrow{t\to\infty} (\bar\beta)_i = \frac{\langle x, y\rangle}{\|x\|^2 + m\sigma_z^2} \qquad (9)$$

which is exactly the solution to the regularized regression problem

$$\min_{w\in\mathbb{R}}\ \|y - wx\|^2 + \lambda w^2$$

with regularization parameter $\lambda := m\sigma_z^2$. To summarize, we have considered a network of coupled, noisy gradient flows implementing unregularized linear regression. When the observations
x are noisy, all elements of the network converge in expectation to a common equilibrium point
representing a regularized solution to the original regression problem.
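The limit in (9) is just the one-dimensional ridge-regression solution, which is easy to verify directly. A minimal sketch (all values below are assumptions for illustration):

    import numpy as np

    x = np.array([0.1, -0.3, 0.25])
    y = np.array([1.0, -2.0, 2.5])
    m, sigma_z = len(x), 0.8
    lam = m * sigma_z**2                       # regularization lambda = m * sigma_z^2

    w_limit = (x @ y) / (x @ x + lam)          # limit of E[w_i(t)] in (9)

    # The same value from directly minimizing ||y - w x||^2 + lam * w^2 on a grid:
    ws = np.linspace(-10, 10, 200001)
    obj = ((y[None, :] - ws[:, None] * x[None, :])**2).sum(axis=1) + lam * ws**2
    print(w_limit, ws[np.argmin(obj)])         # agree up to the grid resolution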
3.1 Convergence behavior

In the previous section we showed that the network converges to the solution of a regularized regression problem, but left open a few important questions: What determines the convergence rate? How does the noise (N1),(N2) impact convergence? How does coupling and redundancy (number of circuits $n$) impact convergence? How do these quantities affect the variance of the error? We can address these questions by decomposing $w(t)$ into orthogonal components, $w(t) = \bar{w}(t)\mathbf{1} + \tilde{w}(t)$, representing the mean-field trajectory $\bar{w} = \frac{1}{n}\mathbf{1}^{\top}w$, and fluctuations about the mean $\tilde{w} = w - \bar{w}\mathbf{1}$. We may then study the error

$$\mathbb{E}\,\tfrac{1}{n}\|w(t) - \bar\beta\|^2 = \mathbb{E}\,\tfrac{1}{n}\|\tilde{w}(t)\|^2 + \mathbb{E}\,\tfrac{1}{n}\|\bar{w}(t)\mathbf{1} - \bar\beta\|^2 \qquad (10)$$

by studying each term separately. Decomposing the error into fluctuations about the average and the distance between the average and the noise-free equilibrium allows one to see that there are actually two different convergence rates governing the system: one determines convergence towards the synchronization subspace (where $\tilde{w} = 0$), and another determines convergence to the equilibrium point $\bar\beta$. The following result provides quantitative answers to the questions posed above:
Theorem 3.1. Let $\tilde{C}, C$ be constants which do not depend on time, and let $\kappa$ denote the smallest non-zero eigenvalue of $L$. Set $\alpha := \|x\|^2 + m\sigma_z^2$ and $\bar\beta := (\langle x, y\rangle/\alpha)\mathbf{1}$, as before. Then for all $t > 0$,

$$\mathbb{E}\,\tfrac{1}{n}\|w(t) - \bar\beta\|^2 \;\le\; \tilde{C}e^{-2(\kappa+\alpha)t} + Ce^{-2\alpha t} + \frac{\sigma^2}{2}\left(\frac{1}{\kappa+\alpha} + \frac{1}{\alpha n}\right). \qquad (11)$$
A proof is given in the supplementary material. The first term of (11) estimates the transient part of the fluctuations term in (10), and we find that the rate of convergence to the synchronization subspace is $2(\kappa + \alpha)$. The second term estimates the transient part of the centroid's trajectory, and we see that the rate of convergence of the mean trajectory to equilibrium is $2\alpha$. In the presence of noise, however, the system will neither synchronize nor reach equilibrium exactly. After transients, we see that the residual error is given by the last term in (11). This term quantifies the steady-state interaction between: gradient noise ($\sigma$); regularization ($\lambda$, via the observation noise $\sigma_z$); network topology and coupling strength (via $\kappa$); and redundancy ($n$).
3.2 Discussion

From the results above we can draw a few conclusions about networks of noisy learning systems:
1. Regularization improves both the synchronization rate and the rate of convergence to equilibrium.
2. Regularization contributes towards reducing the effect of the gradient noise $\sigma$: (N1) counteracts (N2).
3. Regularization changes the solution, so we cannot view regularization as a "free parameter" that can be used solely to improve convergence or reduce noise. Faster convergence rates and noise reduction should be viewed as beneficial side-effects, while the appropriate degree of regularization primarily depends on the learning problem at hand.
4. The number of circuits $n$ and the coupling strength contribute towards reducing the effect of the gradient noise (N2) (that is, the variance of the error) and improve the synchronization rate, but do not affect the rate of convergence toward equilibrium.
5. Coupling strength and redundancy cannot be used to control the degree of regularization, since the equilibrium solution $\bar\beta$ does not depend on $n$ or the spectrum of $L$. This is true no matter how the coupling weights $W_{ij}$ are chosen, since constants will always be in the null space of $L$ and $\bar\beta$ is a constant vector.

In the next section we will show that if the noise processes $\{Z_t^i\}_i$ are themselves trajectories of a coupled network, then synchronization can be a mechanism for controlling the regularization imposed on a learning process.
4 Calibrating regularization with synchronization

If instead of assuming independent noise processes corrupting the data as in (3b), we consider correlated noise variables $(Z_t^i)_{i=1}^m$, it is possible for synchronization to control the regularization which the noise imposes on a learning system of the form (3a). A collection of dependent observational noise processes is perhaps most conveniently modeled by coupling the OU dynamics (3b) introduced before through another (symmetric) network Laplacian $L_z$:

$$dZ_t = -\frac{1}{\varepsilon}(L_z + \eta I)Z_t\, dt + \frac{\sqrt{2}\,\sigma_z}{\sqrt{\varepsilon}}\, dB_t, \qquad (12)$$

for some $\eta > 0$. We now have two networks: the first network of gradient systems is the same as
before, but the observational noise process Zt is now generated by another network. For purposes
of analysis, this model suffices to capture generalized correlated noise sources. In the actual biology, however, correlations may arise in a number of possible ways, which may or may not include
diffusively coupled dynamic noise processes.
To analyze what happens when a network of learning systems (3a) is driven by observation noise of the form (12), we take an approach similar to that of the previous Section. The first step is again homogenization. The system (12) may be viewed as a zero-mean variation of (6), and its solution $Z_t \sim \mathcal{N}(\mu_z(t), \Sigma_z(t))$ is a Gaussian process characterized by

$$\mu_z(t) = e^{-(L_z+\eta I)t/\varepsilon}\,\mathbb{E}[Z(0)] \qquad (13a)$$
$$\Sigma_z(t) = e^{-(L_z+\eta I)t/\varepsilon}\,\mathbb{E}[Z(0)Z(0)^{\top}]\,e^{-(L_z+\eta I)t/\varepsilon} + \sigma_z^2(L_z+\eta I)^{-1}\big(I - e^{-2(L_z+\eta I)t/\varepsilon}\big). \qquad (13b)$$

Taking $t \to \infty$ in (13) yields the stationary distribution $\mu_\infty = \mathcal{N}\big(0,\, \sigma_z^2(L_z+\eta I)^{-1}\big)$. We can now consider (3a) defined with $Z_t$ governed by (12), and average with respect to $\mu_\infty$:

$$dw_t = -\mathbb{E}_{\mu_\infty}\big\{ w_t\|x + Z_t\|^2 - \langle x + Z_t, y\rangle \big\}\, dt + \sigma\, dB_t = -\big( w_t\big[\|x\|^2 + \sigma_z^2\,\mathrm{tr}(L_z+\eta I)^{-1}\big] - \langle x, y\rangle \big)\, dt + \sigma\, dB_t,$$

where we have used that $\mathbb{E}[\|Z_t\|^2] = \sigma_z^2\,\mathrm{tr}(L_z+\eta I)^{-1}$. As before, the averaged approximation is good when $\varepsilon \to 0$. An expression identical to (6),

$$dw_t = (L + \alpha I)(\bar\beta - w_t)\, dt + \sigma\, dB_t \qquad (14)$$

is obtained by redefining $\alpha := \|x\|^2 + \sigma_z^2\,\mathrm{tr}(L_z+\eta I)^{-1}$ and $\bar\beta := (\langle x, y\rangle/\alpha)\mathbf{1}$. In this case, $\lambda = \alpha - \|x\|^2 = \sigma_z^2\,\mathrm{tr}(L_z+\eta I)^{-1}$.
Theorem 3.1 may be immediately applied to understand (14). As before, the covariance of $Z_t$ figures into the regularization parameter. However now the covariance of $Z_t$ is a function of the network Laplacian $L_z = L_z(t)$, which is defined by the topology and potentially time-varying coupling strengths of the noise network. By adjusting the coupling in (12), we adjust the regularization $\lambda$ imposed upon (14). When coupling increases, the dependence among the $Z_t^i$ increases and $\mathrm{tr}(L_z+\eta I)^{-1}$ (and therefore $\lambda$) decreases. Thus, increased correlation among observational noise variables implies decreased regularization.

In the case of all-to-all coupling with uniform strength $a \ge 0$, for example, $L_z$ has eigenvalues $0 = \lambda_1 < \lambda_2 = \cdots = \lambda_m = ma$. The regularization may in this case range over the interval

$$\inf_{a \ge 0}\,\mathrm{tr}(L_z+\eta I)^{-1} = \frac{1}{\eta} \;<\; \frac{m}{\eta} = \sup_{a \ge 0}\,\mathrm{tr}(L_z+\eta I)^{-1}$$

by adjusting the coupling strength $a \in [0, \infty)$. Note that all-to-all coupling may be plausibly implemented with $O(n)$ connections using mechanisms such as quorum sensing (see [3, §2.3], [27]).
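The endpoints of this interval are easy to see numerically; the short sketch below (assumed values) computes tr((L_z + ηI)^(-1)) for all-to-all coupling of strength a, and shows it falling monotonically from m/η toward 1/η as a grows:

    import numpy as np

    m, eta = 20, 3.0
    for a in [0.0, 0.1, 1.0, 10.0, 100.0]:
        Lz = a * (m * np.eye(m) - np.ones((m, m)))   # all-to-all Laplacian, strength a
        reg = np.trace(np.linalg.inv(Lz + eta * np.eye(m)))
        print(a, reg)   # m/eta ~ 6.67 at a = 0, decreasing toward 1/eta ~ 0.33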
5 Distributed computation with noise
We have argued that noise can serve as a mechanism for regularization. Noise may also be harnessed, in a different sense, to compute dynamics of the type discussed above. The distributed
nature of the mechanism we will explore adheres to the general theme of parallel computation in the
brain, and provides one possible explanation for how the gradients introduced previously might be
estimated. The development is closely related to stochastic gradient descent (SGD) ideas appearing
in stochastic approximation [25, 15] and adaptive optics [28].
5.1 Parallel stochastic gradient descent

Let $J(u) : \mathbb{R}^d \to \mathbb{R}$ be a locally Lipschitz Lyapunov cost functional we wish to minimize with respect to some set of control signals $u(t) \in \mathbb{R}^d$. Gradient descent on $J$ can be described by the collection of flows

$$\frac{du_i(t)}{dt} = -\gamma\,\frac{\partial J}{\partial u_i}(u_1, \ldots, u_d), \qquad i = 1, \ldots, d.$$

We consider the case where the gradients above are estimated via finite difference approximations of the form

$$\frac{\partial J(u)}{\partial u_i} \approx \frac{J(u_1, \ldots, u_i + \delta u_i, \ldots, u_d) - J(u_1, \ldots, u_i, \ldots, u_d)}{\delta u_i},$$

where $\delta u_i$ is a small perturbation applied to the $i$-th input. Parallel stochastic gradient descent (PSGD, see e.g. [28]) involves applying i.i.d. stochastic perturbations $\delta u_i$ simultaneously to all inputs in parallel, so that the gradients $\partial_i J(u)$ are estimated as

$$\frac{\partial J(u)}{\partial u_i} \approx \delta J\,\delta u_i, \qquad i = 1, \ldots, d \qquad (15)$$

where $\delta J = J(u_1 + \delta u_1, \ldots, u_i + \delta u_i, \ldots, u_d + \delta u_d) - J(u_1, \ldots, u_i, \ldots, u_d)$. If the $\delta u_i$ are symmetric random variables with mean zero and variance $\sigma^2$, then $\sigma^{-2}\mathbb{E}[\delta J\,\delta u_i]$ is accurate to $O(\sigma^2)$ [28].
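A compact sketch of the estimator (15), with an assumed test function and noise level: perturb all coordinates at once and correlate the change in cost with each perturbation.

    import numpy as np

    def psgd_grad_estimate(J, u, sigma=0.01, n_samples=2000, rng=None):
        # Estimate grad J(u) as E[dJ * du] / sigma^2 over i.i.d. Gaussian perturbations.
        rng = rng or np.random.default_rng(0)
        g = np.zeros(len(u))
        for _ in range(n_samples):
            du = rng.normal(0.0, sigma, len(u))   # simultaneous perturbation of all inputs
            g += (J(u + du) - J(u)) * du
        return g / (n_samples * sigma**2)

    A = np.diag([1.0, 2.0, 3.0])
    J = lambda u: u @ A @ u
    u0 = np.array([1.0, -1.0, 0.5])
    print(psgd_grad_estimate(J, u0))              # Monte Carlo estimate
    print(2 * A @ u0)                             # exact gradient for comparison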
5.2 Stochastic gradient model

The parallel finite difference approximation (15) suggests a more biologically plausible mechanism for implementing gradient dynamics. If the perturbations $\delta u_i$ are taken to be Gaussian i.i.d. random variables, we can model parallel stochastic gradient descent as an Ito process:

$$du_t = -\gamma\big(J(u_t + Z_t) - J(u_t)\big)Z_t\, dt, \qquad u(0) = u_0 \qquad (16a)$$
$$dZ_t = -\frac{1}{\varepsilon}Z_t\, dt + \frac{\sqrt{2}\,\sigma}{\sqrt{\varepsilon}}\, dB_t, \qquad Z(0) = z_0 \qquad (16b)$$

where $B_t$ is a standard $d$-dimensional Brownian motion. Additive noise affecting the gradient has been omitted from (16a) for simplicity, and does not change the fundamental results discussed in this section. The perturbation noise $Z_t$ has again been modeled as a white-noise limit of Ornstein-Uhlenbeck processes (16b). When $\varepsilon \to 0$, Equation (16a) implements PSGD using the approximation given by Equation (15) with $\delta u_i$ zero-mean i.i.d. Gaussian random variables.
We will proceed with an analysis of (16) in the particular case where $J$ is chosen from the quadratic family of cost functionals of the form $J(u) = u^{\top}Au$ where $A$ is a symmetric, bounded and strictly positive definite matrix¹. In this setting the analysis is simpler and suffices to illustrate the main points. This cost function satisfies $\min_{u\in\mathbb{R}^d} J(u) = 0$ with minimizer $u^* = 0$, and $J$ is a Lyapunov function. Equation (16a) now takes the form

$$du_t = -\gamma\big(2u_t^{\top}AZ_t + Z_t^{\top}AZ_t\big)Z_t\, dt, \qquad u(0) = u_0. \qquad (17)$$
5.3 Convergence of continuous-time PSGD with quadratic cost

We turn to studying the convergence behavior of (17) and the precise role of the stochastic perturbations $Z_t$ used to estimate the gradients. These perturbations must be small in order to obtain accurate approximations of the gradients. However, one may also expect that the noise will play an important role in determining convergence properties since it is the noise that ultimately kicks the system "downhill" towards equilibrium. Homogenizing (17) with respect to $Z_t$ leads to the following Theorem, the proof of which is given in the supplementary material.

Theorem 5.1. For any $0 \le t \le T < \infty$, the solution $u(t)$ to (17) satisfies

$$\lim_{\varepsilon \to 0}\,\mathbb{E}[u(t)] = e^{-2\gamma\sigma^2 At}\,u(0). \qquad (18)$$

It is clear from this result that the PSGD system (16), for $\varepsilon \to 0$, converges in expectation globally and exponentially to the minimum of $J$ when $J$ is a positive definite quadratic form. Our earlier intuition that the perturbation noise $\sigma$ should play a role in the rate of convergence is also confirmed: greater noise amplitudes lead to faster convergence. However this comes at a price. The covariance of $u(t)$ after transients is exactly the covariance of $Z_t$. Thus an inherent tradeoff between speed and accuracy must be resolved by any organism implementing PSGD-like mechanisms.
¹Without loss of generality we may assume $A$ is symmetric since the antisymmetric part does not contribute to the quadratic form. In addition, objectives of the form $u^{\top}Au + b^{\top}u + c$ may be expressed in the homogeneous form $u^{\top}Au$ by a suitable change of variables.
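The limit (18) can be checked by direct simulation. The sketch below (assumed gain, noise level, and step size) integrates (17) with fresh Gaussian perturbations at each Euler step, standing in for the ε → 0 limit of (16b), and compares the ensemble mean with the prediction exp(-2γσ²At)u(0):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    gamma, sigma, dt, T, trials = 1.0, 0.3, 1e-3, 2.0, 2000
    u0 = np.array([1.0, -0.5])

    U = np.tile(u0, (trials, 1))
    for _ in range(int(T / dt)):
        Z = rng.normal(0.0, sigma, U.shape)         # white-noise limit of (16b)
        dJ = 2 * ((U @ A) * Z).sum(axis=1) + ((Z @ A) * Z).sum(axis=1)
        U -= gamma * dJ[:, None] * Z * dt
    print(U.mean(axis=0))                           # empirical ensemble mean
    print(expm(-2 * gamma * sigma**2 * A * T) @ u0) # prediction of (18)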
[Figure 1 appears here: two stacks of three plots each, all against time (s). Left stack: sample paths w(t) with the noise amplitude σ_z(t) overlaid; the mean-field trajectory w̄(t) with the steady-state solution β̄(t); and total error versus fluctuations error. Right stack: the coupling strength a_z(t); w̄(t) with the steady-state solution β̄(t); and total error versus fluctuations error.]
Figure 1: (Left stack) Increased observation noise imposes greater regularization, and leads to a reduction
in ambient noise. (Right stack) Stronger coupling/correlation between observation noise processes decreases
regularization. See text for details.
6 Simulations

We first simulated a network of gradient dynamics with uncoupled observation noise processes obeying (3). To illustrate the effect of increasing observation noise variance, the parameter $\sigma_z$ in (3b) was increased from 0.5 to 7 along a monotonic, sigmoidal path over the duration of the simulation. We used $n = 5$ systems (3a) with $\sigma = 4$, coupled all-to-all with uniform strength $W_{ij} = 2$. Observations were sampled according to $(x)_i \sim \mathcal{N}(0, 0.04)$, $(y)_i \sim \mathrm{Uniform}[0, 20]$ with $m = 20$ entries, once and for all, at the beginning of the experiment. Initial conditions were drawn according to $w(0) \sim \mathrm{Uniform}[-3, 3]$, and $Z(0)$ was set to 0. Figure 1 (left three plots) verifies some of the main conclusions of Section 3.2. The top plot shows the sample paths $w(t)$ and the time course of the observational noise deviation $\sigma_z(t)$ (grey labeled trace). When the noise increases near $t = 2.5$s, a dramatic drop in the variance of $w(t)$ is visible. The middle plot shows the center of mass (mean-field) trajectory $\bar{w}(t)$ superimposed upon the time-varying noise-free solution $\bar\beta(t)$ (gray labeled trace). Because the observation noise is increasing, the regularization $\lambda = m\sigma_z^2$ increases and the solution $\bar\beta(t)$ to the regularized problem decreases in magnitude following (9). The bottom plot shows the mean-squared distance to the time-dependent noise-free solution $\bar\beta(t)$, and the mean-squared size of the fluctuations about the centroid $\bar{w}$.² It is clear that the error rapidly drops off when $\sigma_z(t)$ increases, confirming the apparent reduction in the variance of $w(t)$ in the top plot.
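A condensed re-implementation of this first experiment, using the averaged network dynamics (6) rather than the full fast-slow system, might look as follows (a sketch under assumed discretization choices):

    import numpy as np

    rng = np.random.default_rng(2)
    n, m, sigma, dt, T = 5, 20, 4.0, 1e-4, 5.0
    x = rng.normal(0, 0.2, m)
    y = rng.uniform(0, 20, m)
    W = 2.0 * (np.ones((n, n)) - np.eye(n))            # all-to-all coupling, strength 2
    L = np.diag(W.sum(axis=1)) - W
    w = rng.uniform(-3, 3, n)
    for k in range(int(T / dt)):
        t = k * dt
        sig_z = 0.5 + 6.5 / (1 + np.exp(-8 * (t - 2.5)))   # sigmoidal ramp 0.5 -> 7
        alpha = x @ x + m * sig_z**2
        beta_bar = (x @ y / alpha) * np.ones(n)
        w += (L + alpha * np.eye(n)) @ (beta_bar - w) * dt \
             + sigma * np.sqrt(dt) * rng.normal(size=n)
    print(w)   # the sample paths concentrate as sig_z (and hence alpha) grows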
A second experiment, described by the right-hand stack of plots in Figure 1, shows how synchronization can function to adjust regularization over time. This simulation is inspired by the experimental
study of noise correlations in cortical area MT due to [10], where it was suggested that time-varying
correlations between pairs of neurons play a significant role in explaining behavioral variation in
smooth-pursuit eye movements. In particular, the findings in [10] and [4] suggest that short-term
increases in noise correlations are likely to occur after feedback arrives and neurons within and upstream from MT synchronize. We simulated a collection of correlated observation noise processes
obeying (12) ($\varepsilon = 10^{-3}$, $\eta = 3$) with all-to-all topology and uniform coupling strength $a_z(t)$ increasing from 0 to 2 along the profile shown in Figure 1 (top-right plot, labeled gray trace). This noise process $Z_t$ was then fed to a population of $n = 5$ units obeying (3a), with ambient noise $\sigma = 1$ and all-to-all coupling at fixed strength $W_{ij} = 2$. New data $x, y$ and initial conditions were chosen as in the previous experiment. The middle plot on the right-hand side shows the effect of increasing synchronization among the observation noise processes. As the coupling increases, the noise becomes more correlated and regularization decreases. This in turn causes the desired solution $\bar\beta(t)$ to the regression problem to increase in magnitude (labeled gray trace). With decreased regularization, the ambient noise is more pronounced. The bottom-right plot shows the mean fluctuation size and distance to the noise-free solution (total error). An increase in the noise variance is apparent following the increase in observational noise correlation.
²These quantities are similar to those defined in (10), but represent only this single simulation, not in expectation. Here, ergodic theory allows one to (very roughly) infer ensemble averages by visually estimating time averages.
Acknowledgments
The authors are grateful to Rodolfo Llinas for pointing out the plausible analogy between gradient
search in adaptive optics and learning mechanisms in the brain. JB was supported under DARPA
FA8650-11-1-7150 SUB#7-3130298, NSF IIS-08-03293 and WA State U. SUB#113054 G002745.
References
[1] C. M. Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108-116, 1995.
[2] O. Bousquet and A. Elisseeff. Stability and generalization. J. Mach. Learn. Res., 2(3):499-526, 2002.
[3] J. Bouvrie and J.-J. Slotine. Synchronization and redundancy: Implications for robustness of neural learning and decision making. Neural Computation, 23(11):2915-2941, 2011.
[4] S. C. de Oliveira, A. Thiele, and K. P. Hoffmann. Synchronization of neuronal activity during stimulus expectation in a direction discrimination task. J Neurosci., 17(23):9248-60, 1997.
[5] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer, 1996.
[6] A. Faisal, L. Selen, and D. Wolpert. Noise in the nervous system. Nat. Rev. Neurosci., 9:292-303, April 2008.
[7] T. J. Gawne and B. J. Richmond. How independent are the messages carried by adjacent inferior temporal cortical neurons? J Neurosci., 13(7):2758-71, 1993.
[8] Y. Gu, S. Liu, C. R. Fetsch, Y. Yang, S. Fok, A. Sunkara, G. C. DeAngelis, and D. E. Angelaki. Perceptual learning reduces interneuronal correlations in macaque visual cortex. Neuron, 71(4):750-761, 2011.
[9] T. D. Hanks, M. E. Mazurek, R. Kiani, E. Hopp, and M. N. Shadlen. Elapsed decision time affects the weighting of prior probability in a perceptual decision task. J. Neurosci., 31(17):6339-52, 2011.
[10] X. Huang and S. G. Lisberger. Noise correlations in cortical area MT and their potential impact on trial-by-trial variation in the direction and speed of smooth-pursuit eye movements. J. Neurophysiol, 101:3012-3030, 2009.
[11] O. Kallenberg. Foundations of Modern Probability. Springer, 2002.
[12] R. Kiani and M. N. Shadlen. Representation of confidence associated with a decision by neurons in the parietal cortex. Science, 324(5928):759-764, 2009.
[13] T. Kinard, G. De Vries, A. Sherman, and L. Satin. Modulation of the bursting properties of single mouse pancreatic β-cells by artificial conductances. Biophysical Journal, 76(3):1423-1435, 1999.
[14] K. P. Körding and D. M. Wolpert. Bayesian decision theory in sensorimotor control. Trends in Cognitive Sciences, 10(7):319-326, 2006.
[15] H. J. Kushner and G. Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer, 2nd edition, 2003.
[16] M. Mesbahi and M. Egerstedt. Graph Theoretic Methods in Multiagent Networks. Princeton U. Press, 2010.
[17] D. J. Needleman, P. H. Tiesinga, and T. J. Sejnowski. Collective enhancement of precision in networks of coupled oscillators. Physica D: Nonlinear Phenomena, 155(3-4):324-336, 2001.
[18] E. Pardoux and A. Yu. Veretennikov. On the Poisson equation and diffusion approximation. I. Annals of Probability, 29(3):1061-1085, 2001.
[19] Q.-C. Pham, N. Tabareau, and J.-J. Slotine. A contraction theory approach to stochastic incremental stability. IEEE Transactions on Automatic Control, 54(4):816-820, April 2009.
[20] T. Poggio and S. Smale. The mathematics of learning: dealing with data. Notices Amer. Math. Soc., 50(5):537-544, 2003.
[21] R. P. Rao and D. H. Ballard. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci., 2:79-87, 1999.
[22] A. Schnitzler and J. Gross. Normal and pathological oscillatory communication in the brain. Nature Reviews Neuroscience, 6:285-296, 2005.
[23] A. Sherman and J. Rinzel. Model for synchronization of pancreatic beta-cells by gap junction coupling. Biophysical Journal, 59(3):547-559, 1991.
[24] M. A. Smith and A. Kohn. Spatial and temporal scales of neuronal correlation in primary visual cortex. J Neurosci., 28(48):12591-12603, 2008.
[25] J. C. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37:332-341, 1992.
[26] N. Tabareau, J.-J. Slotine, and Q.-C. Pham. How synchronization protects from noise. PLoS Comput Biol, 6(1):e1000637, Jan 2010.
[27] A. Taylor, M. Tinsley, F. Wang, Z. Huang, and K. Showalter. Dynamical quorum sensing and synchronization in large populations of chemical oscillators. Science, 323(5914):614-617, 2009.
[28] M. A. Vorontsov, G. W. Carhart, and J. C. Ricklin. Adaptive phase-distortion correction based on parallel gradient-descent optimization. Opt. Lett., 22(12):907-909, Jun 1997.
[29] T. Yang and M. N. Shadlen. Probabilistic reasoning by neurons. Nature, 447(7148):1075-1080, 2007.
3,897 | 4,528 | Classification Calibration Dimension for
General Multiclass Losses
Harish G. Ramaswamy Shivani Agarwal
Department of Computer Science and Automation
Indian Institute of Science, Bangalore 560012, India
{harish_gurup,shivani}@csa.iisc.ernet.in
Abstract
We study consistency properties of surrogate loss functions for general multiclass classification problems, defined by a general loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be classification calibrated with respect to a loss matrix in this setting. We then introduce the notion of classification calibration dimension of a multiclass loss matrix, which measures the smallest "size" of a prediction space for which it is possible to design a convex surrogate that is classification calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, as one application, we provide a different route from the recent result of Duchi et al. (2010) for analyzing the difficulty of designing "low-dimensional" convex surrogates that are consistent with respect to pairwise subset ranking losses. We anticipate the classification calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.
1 Introduction

There has been significant interest and progress in recent years in understanding consistency of learning methods for various finite-output learning problems, such as binary classification, multiclass 0-1 classification, and various forms of ranking and multi-label prediction problems [1-15]. Such finite-output problems can all be viewed as instances of a general multiclass learning problem, whose structure is defined by a loss function, or equivalently, by a loss matrix. While the studies above have contributed to the understanding of learning problems corresponding to certain forms of loss matrices, a framework for analyzing consistency properties for a general multiclass learning problem, defined by a general loss matrix, has remained elusive.

In this paper, we analyze consistency of surrogate losses for general multiclass learning problems, building on the results of [3, 5-7] and others. We start in Section 2 with some background and examples that will be used as running examples to illustrate concepts throughout the paper, and formalize the notion of classification calibration with respect to a general loss matrix. In Section 3, we derive both necessary and sufficient conditions for classification calibration with respect to general multiclass losses; these are both of independent interest and useful in our later results. Section 4 introduces the notion of classification calibration dimension of a loss matrix, a fundamental quantity that measures the smallest "size" of a prediction space for which it is possible to design a convex surrogate that is classification calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. As one application, in Section 5, we provide a different route from the recent result of Duchi et al. [10] for analyzing the difficulty of designing "low-dimensional" convex surrogates that are consistent with respect to certain pairwise subset ranking losses. We conclude in Section 6 with some future directions.
2 Preliminaries, Examples, and Background

Setup. We are given training examples $(X_1, Y_1), \ldots, (X_m, Y_m)$ drawn i.i.d. from a distribution $D$ on $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ is an instance space and $\mathcal{Y} = [n] = \{1, \ldots, n\}$ is a finite set of class labels. We are also given a finite set $\mathcal{T} = [k] = \{1, \ldots, k\}$ of target labels in which predictions are to be made, and a loss function $\ell : \mathcal{Y} \times \mathcal{T} \to [0, \infty)$, where $\ell(y, t)$ denotes the loss incurred on predicting $t \in \mathcal{T}$ when the label is $y \in \mathcal{Y}$. In many common learning problems, $\mathcal{T} = \mathcal{Y}$, but in general, these could be different (e.g. when there is an "abstain" option available to a classifier, in which case $k = n + 1$). We will find it convenient to represent the loss function $\ell$ as a loss matrix $\mathbf{L} \in \mathbb{R}_+^{n \times k}$ (here $\mathbb{R}_+ = [0, \infty)$), and for each $y \in [n]$, $t \in [k]$, will denote by $\ell_{yt}$ the $(y, t)$-th element of $\mathbf{L}$, $\ell_{yt} = (\mathbf{L})_{yt} = \ell(y, t)$, and by $\ell_t$ the $t$-th column of $\mathbf{L}$, $\ell_t = (\ell_{1t}, \ldots, \ell_{nt})^{\top} \in \mathbb{R}^n$. Some examples follow:
Example 1 (0-1 loss). Here $\mathcal{Y} = \mathcal{T} = [n]$, and the loss incurred is 1 if the predicted label $t$ is different from the actual class label $y$, and 0 otherwise: $\ell_{0\text{-}1}(y, t) = \mathbf{1}(t \ne y)$, where $\mathbf{1}(\cdot)$ is 1 if the argument is true and 0 otherwise. The loss matrix $\mathbf{L}^{0\text{-}1}$ for $n = 3$ is shown in Figure 1(a).

Example 2 (Ordinal regression loss). Here $\mathcal{Y} = \mathcal{T} = [n]$, and predictions $t$ farther away from the actual class label $y$ are penalized more heavily, e.g. using absolute distance: $\ell_{\mathrm{ord}}(y, t) = |t - y|$. The loss matrix $\mathbf{L}^{\mathrm{ord}}$ for $n = 3$ is shown in Figure 1(b).

Example 3 (Hamming loss). Here $\mathcal{Y} = \mathcal{T} = [2^r]$ for some $r \in \mathbb{N}$, and the loss incurred on predicting $t$ when the actual class label is $y$ is the number of bit-positions in which the $r$-bit binary representations of $t - 1$ and $y - 1$ differ: $\ell_{\mathrm{Ham}}(y, t) = \sum_{i=1}^{r} \mathbf{1}((t-1)_i \ne (y-1)_i)$, where for any $z \in \{0, \ldots, 2^r - 1\}$, $z_i \in \{0, 1\}$ denotes the $i$-th bit in the $r$-bit binary representation of $z$. The loss matrix $\mathbf{L}^{\mathrm{Ham}}$ for $r = 2$ is shown in Figure 1(c). This loss is used in sequence labeling tasks [16].

Example 4 ("Abstain" loss). Here $\mathcal{Y} = [n]$ and $\mathcal{T} = [n+1]$, where $t = n+1$ denotes "abstain". One possible loss function in this setting assigns a loss of 1 to incorrect predictions in $[n]$, 0 to correct predictions, and $\frac{1}{2}$ for abstaining: $\ell_{(?)}(y, t) = \mathbf{1}(t \ne y)\,\mathbf{1}(t \in [n]) + \frac{1}{2}\,\mathbf{1}(t = n+1)$. The loss matrix $\mathbf{L}^{(?)}$ for $n = 3$ is shown in Figure 1(d).
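These four running examples are easy to materialize as matrices for experimentation; a minimal sketch (illustrative helper functions, not code from the paper, using 0-indexed labels):

    import numpy as np

    def loss_01(n):
        return 1.0 - np.eye(n)

    def loss_ord(n):
        t = np.arange(n)
        return np.abs(t[:, None] - t[None, :]).astype(float)

    def loss_ham(r):
        n = 2 ** r   # Hamming distance between bit patterns is popcount(y XOR t)
        return np.array([[bin(y ^ t).count("1") for t in range(n)] for y in range(n)], float)

    def loss_abstain(n):
        return np.hstack([1.0 - np.eye(n), 0.5 * np.ones((n, 1))])

    print(loss_ord(3))       # matches Figure 1(b)
    print(loss_abstain(3))   # matches Figure 1(d)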
The goal in the above setting is to learn from the training examples a function $h : \mathcal{X} \to [k]$ with low expected loss on a new example drawn from $D$, which we will refer to as the $\ell$-risk of $h$:

$$\mathrm{er}_D^{\ell}[h] = \mathbb{E}_{(X,Y)\sim D}\,\ell(Y, h(X)) = \mathbb{E}_X \sum_{y=1}^{n} p_y(X)\,\ell(y, h(X)) = \mathbb{E}_X\, p(X)^{\top}\ell_{h(X)}, \qquad (1)$$

where $p_y(x) = \mathbb{P}(Y = y \mid X = x)$ under $D$, and $p(x) = (p_1(x), \ldots, p_n(x))^{\top} \in \mathbb{R}^n$ denotes the conditional probability vector at $x$. In particular, the goal is to learn a function with $\ell$-risk close to the optimal $\ell$-risk, defined as

$$\mathrm{er}_D^{\ell,*} = \inf_{h:\mathcal{X}\to[k]} \mathrm{er}_D^{\ell}[h] = \inf_{h:\mathcal{X}\to[k]} \mathbb{E}_X\, p(X)^{\top}\ell_{h(X)} = \mathbb{E}_X \min_{t\in[k]} p(X)^{\top}\ell_t. \qquad (2)$$
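Equation (2) says the pointwise optimal prediction at $x$ is any minimizer of $p(x)^{\top}\ell_t$ over the columns of the loss matrix. A two-line illustration (assumed example values):

    import numpy as np

    def bayes_predict(L, p):
        # Optimal prediction argmin_t p^T l_t for loss matrix L (n x k), p in the simplex.
        return int(np.argmin(p @ L))

    L_abstain = np.hstack([1.0 - np.eye(3), 0.5 * np.ones((3, 1))])
    print(bayes_predict(L_abstain, np.array([0.4, 0.35, 0.25])))  # -> 3, i.e. abstain
    print(bayes_predict(L_abstain, np.array([0.7, 0.2, 0.1])))    # -> 0, predict class 1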
Minimizing the discrete $\ell$-risk directly is typically difficult computationally; consequently, one usually employs a surrogate loss function $\psi : \mathcal{Y} \times \hat{\mathcal{T}} \to \mathbb{R}_+$ operating on a surrogate target space $\hat{\mathcal{T}} \subseteq \mathbb{R}^d$ for some appropriate $d \in \mathbb{N}$,¹ and minimizes (approximately, based on the training sample) the $\psi$-risk instead, defined for a (vector) function $f : \mathcal{X} \to \hat{\mathcal{T}}$ as

$$\mathrm{er}_D^{\psi}[f] = \mathbb{E}_{(X,Y)\sim D}\,\psi(Y, f(X)) = \mathbb{E}_X \sum_{y=1}^{n} p_y(X)\,\psi(y, f(X)). \qquad (3)$$

The learned function $f : \mathcal{X} \to \hat{\mathcal{T}}$ is then used to make predictions in $[k]$ via some transformation $\mathrm{pred} : \hat{\mathcal{T}} \to [k]$: the prediction on a new instance $x \in \mathcal{X}$ is given by $\mathrm{pred}(f(x))$, and the $\ell$-risk incurred is $\mathrm{er}_D^{\ell}[\mathrm{pred} \circ f]$. As an example, several algorithms for multiclass classification with respect to 0-1 loss learn a function of the form $f : \mathcal{X} \to \mathbb{R}^n$ and predict according to $\mathrm{pred}(f(x)) = \mathrm{argmax}_{t\in[n]} f_t(x)$.
Below we will find it useful to represent the surrogate loss function $\psi$ via $n$ real-valued functions $\psi_y : \hat{\mathcal{T}} \to \mathbb{R}_+$ defined as $\psi_y(\hat{t}) = \psi(y, \hat{t})$ for $y \in [n]$, or equivalently, as a vector-valued function $\psi : \hat{\mathcal{T}} \to \mathbb{R}_+^n$ defined as $\psi(\hat{t}) = (\psi_1(\hat{t}), \ldots, \psi_n(\hat{t}))^{\top}$. We will also define the sets

$$\mathcal{R}_\psi = \big\{\psi(\hat{t}) : \hat{t} \in \hat{\mathcal{T}}\big\} \quad \text{and} \quad \mathcal{S}_\psi = \mathrm{conv}(\mathcal{R}_\psi), \qquad (4)$$

where for any $A \subseteq \mathbb{R}^n$, $\mathrm{conv}(A)$ denotes the convex hull of $A$.
¹Equivalently, one can define $\psi : \mathcal{Y} \times \mathbb{R}^d \to \bar{\mathbb{R}}_+$, where $\bar{\mathbb{R}}_+ = \mathbb{R}_+ \cup \{\infty\}$ and $\psi(y, \hat{t}) = \infty\ \forall \hat{t} \notin \hat{\mathcal{T}}$.
Figure 1: Loss matrices corresponding to Examples 1-4:
(a) $\mathbf{L}^{0\text{-}1}$ for $n = 3$: rows $(0, 1, 1)$, $(1, 0, 1)$, $(1, 1, 0)$;
(b) $\mathbf{L}^{\mathrm{ord}}$ for $n = 3$: rows $(0, 1, 2)$, $(1, 0, 1)$, $(2, 1, 0)$;
(c) $\mathbf{L}^{\mathrm{Ham}}$ for $r = 2$ ($n = 4$): rows $(0, 1, 1, 2)$, $(1, 0, 2, 1)$, $(1, 2, 0, 1)$, $(2, 1, 1, 0)$;
(d) $\mathbf{L}^{(?)}$ for $n = 3$: rows $(0, 1, 1, \frac{1}{2})$, $(1, 0, 1, \frac{1}{2})$, $(1, 1, 0, \frac{1}{2})$.
Under suitable conditions, algorithms that approximately minimize the $\psi$-risk based on a training sample are known to be consistent with respect to the $\psi$-risk, i.e. to converge (in probability) to the optimal $\psi$-risk, defined as

$$\mathrm{er}_D^{\psi,*} = \inf_{f:\mathcal{X}\to\hat{\mathcal{T}}} \mathrm{er}_D^{\psi}[f] = \inf_{f:\mathcal{X}\to\hat{\mathcal{T}}} \mathbb{E}_X\, p(X)^{\top}\psi(f(X)) = \mathbb{E}_X \inf_{z\in\mathcal{R}_\psi} p(X)^{\top}z = \mathbb{E}_X \inf_{z\in\mathcal{S}_\psi} p(X)^{\top}z. \qquad (5)$$

This raises the natural question of whether, for a given loss $\ell$, there are surrogate losses $\psi$ for which consistency with respect to the $\psi$-risk also guarantees consistency with respect to the $\ell$-risk, i.e. guarantees convergence (in probability) to the optimal $\ell$-risk (defined in Eq. (2)). This question has been studied in detail for the 0-1 loss, and for square losses of the form $\ell(y, t) = a_y\,\mathbf{1}(t \ne y)$, which can be analyzed similarly to the 0-1 loss [6, 7]. In this paper, we consider this question for general multiclass losses $\ell : [n] \times [k] \to \mathbb{R}_+$, including rectangular losses with $k \ne n$. The only assumption we make on $\ell$ is that for each $t \in [k]$, $\exists p \in \Delta_n$ such that $\mathrm{argmin}_{t'\in[k]} p^{\top}\ell_{t'} = \{t\}$ (otherwise the label $t$ never needs to be predicted and can simply be ignored).²
Definitions and Results. We will need the following definitions and basic results, generalizing those of [5-7]. The notion of classification calibration will be central to our study; as Theorem 3 below shows, classification calibration of a surrogate loss $\psi$ w.r.t. $\ell$ corresponds to the property that consistency w.r.t. $\psi$-risk implies consistency w.r.t. $\ell$-risk. Proofs of these results are straightforward generalizations of those in [6, 7] and are omitted.
Definition 1 (Classification calibration). A surrogate loss function $\psi : [n] \times \hat{\mathcal{T}} \to \mathbb{R}_+$ is said to be classification calibrated with respect to a loss function $\ell : [n] \times [k] \to \mathbb{R}_+$ over $\mathcal{P} \subseteq \Delta_n$ if there exists a function $\mathrm{pred} : \hat{\mathcal{T}} \to [k]$ such that

$$\forall p \in \mathcal{P}: \quad \inf_{\hat{t}\in\hat{\mathcal{T}}\,:\,\mathrm{pred}(\hat{t})\notin\mathrm{argmin}_t p^{\top}\ell_t} p^{\top}\psi(\hat{t}) \;>\; \inf_{\hat{t}\in\hat{\mathcal{T}}} p^{\top}\psi(\hat{t}).$$
Lemma 2. Let $\ell : [n] \times [k] \to \mathbb{R}_+$ and $\psi : [n] \times \hat{\mathcal{T}} \to \mathbb{R}_+$. Then $\psi$ is classification calibrated with respect to $\ell$ over $\mathcal{P} \subseteq \Delta_n$ iff there exists a function $\mathrm{pred}' : \mathcal{S}_\psi \to [k]$ such that

$$\forall p \in \mathcal{P}: \quad \inf_{z\in\mathcal{S}_\psi\,:\,\mathrm{pred}'(z)\notin\mathrm{argmin}_t p^{\top}\ell_t} p^{\top}z \;>\; \inf_{z\in\mathcal{S}_\psi} p^{\top}z.$$
Theorem 3. Let $\ell : [n] \times [k] \to \mathbb{R}_+$ and $\psi : [n] \times \hat{\mathcal{T}} \to \mathbb{R}_+$. Then $\psi$ is classification calibrated with respect to $\ell$ over $\Delta_n$ iff $\exists$ a function $\mathrm{pred} : \hat{\mathcal{T}} \to [k]$ such that for all distributions $D$ on $\mathcal{X} \times [n]$ and all sequences of random (vector) functions $f_m : \mathcal{X} \to \hat{\mathcal{T}}$ (depending on $(X_1, Y_1), \ldots, (X_m, Y_m)$),³

$$\mathrm{er}_D^{\psi}[f_m] \xrightarrow{P} \mathrm{er}_D^{\psi,*} \quad \text{implies} \quad \mathrm{er}_D^{\ell}[\mathrm{pred}\circ f_m] \xrightarrow{P} \mathrm{er}_D^{\ell,*}.$$
Definition 4 (Positive normals). Let $\psi : [n] \times \hat{\mathcal{T}} \to \mathbb{R}_+$. For each point $z \in \mathcal{S}_\psi$, the set of positive normals at $z$ is defined as⁴

$$\mathcal{N}_{\mathcal{S}_\psi}(z) = \big\{p \in \Delta_n : p^{\top}(z - z') \le 0\ \forall z' \in \mathcal{S}_\psi\big\}.$$

Definition 5 (Trigger probabilities). Let $\ell : [n] \times [k] \to \mathbb{R}_+$. For each $t \in [k]$, the set of trigger probabilities of $t$ with respect to $\ell$ is defined as

$$\mathcal{Q}_t^{\ell} = \big\{p \in \Delta_n : p^{\top}(\ell_t - \ell_{t'}) \le 0\ \forall t' \in [k]\big\} = \big\{p \in \Delta_n : t \in \mathrm{argmin}_{t'\in[k]} p^{\top}\ell_{t'}\big\}.$$
Examples of trigger probability sets for various losses are shown in Figure 2.
²Here $\Delta_n$ denotes the probability simplex in $\mathbb{R}^n$, $\Delta_n = \{p \in \mathbb{R}^n : p_i \ge 0\ \forall i \in [n],\ \sum_{i=1}^{n} p_i = 1\}$.
³Here $\xrightarrow{P}$ denotes convergence in probability.
⁴The set of positive normals is non-empty only at points $z$ in the boundary of $\mathcal{S}_\psi$.
Figure 2: Trigger probability sets for (a) 0-1 loss $\ell_{0\text{-}1}$; (b) ordinal regression loss $\ell_{\mathrm{ord}}$; and (c) "abstain" loss $\ell_{(?)}$; all for $n = 3$, for which the probability simplex can be visualized easily. Calculations of these sets can be found in the appendix. We note that such sets have also been studied in [17, 18]. The sets shown are:

(a) $\mathcal{Q}_1^{0\text{-}1} = \{p \in \Delta_3 : p_1 \ge \max(p_2, p_3)\}$, $\mathcal{Q}_2^{0\text{-}1} = \{p \in \Delta_3 : p_2 \ge \max(p_1, p_3)\}$, $\mathcal{Q}_3^{0\text{-}1} = \{p \in \Delta_3 : p_3 \ge \max(p_1, p_2)\}$;
(b) $\mathcal{Q}_1^{\mathrm{ord}} = \{p \in \Delta_3 : p_1 \ge \frac{1}{2}\}$, $\mathcal{Q}_2^{\mathrm{ord}} = \{p \in \Delta_3 : p_1 \le \frac{1}{2},\ p_3 \le \frac{1}{2}\}$, $\mathcal{Q}_3^{\mathrm{ord}} = \{p \in \Delta_3 : p_3 \ge \frac{1}{2}\}$;
(c) $\mathcal{Q}_1^{(?)} = \{p \in \Delta_3 : p_1 \ge \frac{1}{2}\}$, $\mathcal{Q}_2^{(?)} = \{p \in \Delta_3 : p_2 \ge \frac{1}{2}\}$, $\mathcal{Q}_3^{(?)} = \{p \in \Delta_3 : p_3 \ge \frac{1}{2}\}$, $\mathcal{Q}_4^{(?)} = \{p \in \Delta_3 : \max(p_1, p_2, p_3) \le \frac{1}{2}\}$.
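Membership in a trigger set $\mathcal{Q}_t^{\ell}$ is a finite set of linear inequalities, so the regions above can be recovered numerically; a minimal sketch (assumed example values):

    import numpy as np

    def in_trigger_set(L, p, t):
        # True iff p lies in Q_t: column t of L is among the minimizers of p^T l_t'.
        c = p @ L
        return bool(np.isclose(c[t], c.min()))

    L_ord = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], float)
    p = np.array([0.55, 0.30, 0.15])
    print([in_trigger_set(L_ord, p, t) for t in range(3)])
    # [True, False, False]: consistent with Q_1^ord = {p : p_1 >= 1/2}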
3 Necessary and Sufficient Conditions for Classification Calibration

We start by giving a necessary condition for classification calibration of a surrogate loss $\psi$ with respect to any multiclass loss $\ell$ over $\Delta_n$, which requires the positive normals of all points $z \in \mathcal{S}_\psi$ to be "well-behaved" w.r.t. $\ell$ and generalizes the "admissibility" condition used for 0-1 loss in [7]. All proofs not included in the main text can be found in the appendix.

Theorem 6. Let $\psi : [n] \times \hat{\mathcal{T}} \to \mathbb{R}_+$ be classification calibrated with respect to $\ell : [n] \times [k] \to \mathbb{R}_+$ over $\Delta_n$. Then for all $z \in \mathcal{S}_\psi$, there exists some $t \in [k]$ such that $\mathcal{N}_{\mathcal{S}_\psi}(z) \subseteq \mathcal{Q}_t^{\ell}$.
We note that, as in [7], it is possible to give a necessary and sufficient condition for classification calibration in terms of a similar property holding for positive normals associated with projections of $\mathcal{S}_\psi$ in lower dimensions. Instead, below we give a different sufficient condition that will be helpful in showing classification calibration of certain surrogates. In particular, we show that for a surrogate loss $\psi$ to be classification calibrated with respect to $\ell$ over $\Delta_n$, it is sufficient for the above property of positive normals to hold only at a finite number of points in $\mathcal{R}_\psi$, as long as their positive normal sets jointly cover $\Delta_n$:

Theorem 7. Let $\ell : [n] \times [k] \to \mathbb{R}_+$ and $\psi : [n] \times \hat{\mathcal{T}} \to \mathbb{R}_+$. Suppose there exist $r \in \mathbb{N}$ and $z_1, \ldots, z_r \in \mathcal{R}_\psi$ such that $\bigcup_{j=1}^{r} \mathcal{N}_{\mathcal{S}_\psi}(z_j) = \Delta_n$ and for each $j \in [r]$, $\exists t \in [k]$ such that $\mathcal{N}_{\mathcal{S}_\psi}(z_j) \subseteq \mathcal{Q}_t^{\ell}$. Then $\psi$ is classification calibrated with respect to $\ell$ over $\Delta_n$.
Computation of $\mathcal{N}_{\mathcal{S}_\psi}(z)$. The conditions in the above results both involve the sets of positive normals $\mathcal{N}_{\mathcal{S}_\psi}(z)$ at various points $z \in \mathcal{S}_\psi$. Thus in order to use the above results to show that a surrogate $\psi$ is (or is not) classification calibrated with respect to a loss $\ell$, one needs to be able to compute or characterize the sets $\mathcal{N}_{\mathcal{S}_\psi}(z)$. Here we give a method for computing these sets for certain surrogate losses $\psi$ and points $z \in \mathcal{S}_\psi$.
Lemma 8. Let T? ? Rd be a convex set and let ? : T? ?Rn+ be convex.5 Let z = ?(?t) for some
?t ? T? such that for each y ? [n], the subdifferential of ?y at ?t can be written as ??y (?t) =
?n
conv({w1y , . . . , wsyy }) for some sy ? N and w1y , . . . , wsyy ? Rd .6 Let s = y=1 sy , and let
?
?
B = [byj ] ? Rn?s ,
A = w11 . . . ws11 w12 . . . ws22 . . . . . . w1n . . . wsnn ? Rd?s ;
where byj is 1 if the j-th column of A came from {w1y , . . . , wsyy } and 0 otherwise. Then
?
?
NS? (z) = p ? ?n : p = Bq for some q ? Null(A) ? ?s ,
where Null(A) ? Rs denotes the null space of the matrix A.
5
A vector function is convex if all its component functions are convex.
? + at a point u0 ? Rd is de?ned as
Recall ?that the subdifferential of a convex function ? : Rd ??R
??(u0 ) = w ? Rd : ?(u) ? ?(u0 ) ? w? (u ? u0 ) ?u ? Rd and is a convex set in Rd (e.g. see [19]).
6
4
We give an example illustrating the use of Theorem 7 and Lemma 8 to show classification calibration of a certain surrogate loss with respect to the ordinal regression loss ℓ^ord defined in Example 2:

Example 5 (Classification calibrated surrogate for ordinal regression loss). Consider the ordinal regression loss ℓ^ord defined in Example 2 for n = 3. Let T̂ = ℝ, and let ψ : {1, 2, 3} × ℝ → ℝ_+ be defined as (see Figure 3)

ψ(y, t̂) = |t̂ − y|  ∀y ∈ {1, 2, 3}, t̂ ∈ ℝ.   (6)

Thus R_ψ = {ψ(t) = (|t − 1|, |t − 2|, |t − 3|)ᵀ : t ∈ ℝ}. We will show there are 3 points in R_ψ satisfying the conditions of Theorem 7. Specifically, consider t̂_1 = 1, t̂_2 = 2, and t̂_3 = 3, giving z_1 = ψ(t̂_1) = (0, 1, 2)ᵀ, z_2 = ψ(t̂_2) = (1, 0, 1)ᵀ, and z_3 = ψ(t̂_3) = (2, 1, 0)ᵀ in R_ψ. Observe that T̂ here is a convex set and ψ : T̂ → ℝ^3 is a convex function. Moreover, for t̂_1 = 1, we have

∂ψ_1(1) = [−1, 1] = conv({+1, −1});  ∂ψ_2(1) = {−1} = conv({−1});  ∂ψ_3(1) = {−1} = conv({−1}).

Therefore, we can use Lemma 8 to compute N_{S_ψ}(z_1). Here s = 4, and

A = [+1 −1 −1 −1];  B = [[1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]].

This gives

N_{S_ψ}(z_1) = {p ∈ Δ_3 : p = (q_1 + q_2, q_3, q_4) for some q ∈ Δ_4, q_1 − q_2 − q_3 − q_4 = 0}
  = {p ∈ Δ_3 : p = (q_1 + q_2, q_3, q_4) for some q ∈ Δ_4, q_1 = 1/2}
  = {p ∈ Δ_3 : p_1 ≥ 1/2}
  = Q^ord_1.

[Figure 3: The surrogate ψ.]

A similar procedure yields N_{S_ψ}(z_2) = Q^ord_2 and N_{S_ψ}(z_3) = Q^ord_3. Thus, by Theorem 7, we get that ψ is classification calibrated with respect to ℓ^ord over Δ_3.

We note that in general, computational procedures such as Fourier-Motzkin elimination [20] can be helpful in computing N_{S_ψ}(z) via Lemma 8.
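The calculation in Example 5 can also be spot-checked numerically. The following sketch (our addition) parameterizes Null(A) ∩ Δ_4 for the A above, maps it through B as in Lemma 8, and confirms the image satisfies p_1 ≥ 1/2; the sampling scheme is ours and relies on the observation that Aq = 0 together with q ∈ Δ_4 forces q_1 = 1/2:

```python
import numpy as np

A = np.array([[1.0, -1.0, -1.0, -1.0]])   # stacked subgradients at t_hat = 1
B = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
for _ in range(5):
    # A q = q1 - q2 - q3 - q4 = 0 and sum(q) = 1 together force q1 = 1/2,
    # so Null(A) intersected with the simplex is {(1/2, rest): rest sums to 1/2}.
    rest = rng.dirichlet(np.ones(3)) * 0.5
    q = np.concatenate(([0.5], rest))
    assert abs(float(A @ q)) < 1e-12 and abs(q.sum() - 1.0) < 1e-12
    p = B @ q
    print(p, bool(p[0] >= 0.5))           # always a point of Delta_3 with p1 >= 1/2
```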
4 Classification Calibration Dimension
We now turn to the study of a fundamental quantity associated with the property of classification calibration with respect to a general multiclass loss ℓ. Specifically, in the above example, we saw that to develop a classification calibrated surrogate loss w.r.t. the ordinal regression loss for n = 3, it was sufficient to consider a surrogate target space T̂ = ℝ, with dimension d = 1; in addition, this yielded a convex surrogate ψ : ℝ → ℝ^3_+ which can be used in developing computationally efficient algorithms. In fact the same surrogate target space with d = 1 can be used to develop a similar convex, classification calibrated surrogate loss w.r.t. the ordinal regression loss for any n ∈ ℕ. However not all losses ℓ have such 'low-dimensional' surrogates. This raises the natural question of what is the smallest dimension d that supports a convex classification calibrated surrogate for a given multiclass loss ℓ, and leads us to the following definition:

Definition 9 (Classification calibration dimension). Let ℓ : [n] × [k] → ℝ_+. Define the classification calibration dimension (CC dimension) of ℓ as

CCdim(ℓ) = min{d ∈ ℕ : ∃ a convex set T̂ ⊆ ℝ^d and a convex surrogate ψ : T̂ → ℝ^n_+ that is classification calibrated w.r.t. ℓ over Δ_n},

if the above set is non-empty, and CCdim(ℓ) = ∞ otherwise.

From the above discussion, CCdim(ℓ^ord) = 1 for all n. In the following, we will be interested in developing an understanding of the CC dimension for general losses ℓ, and in particular in deriving upper and lower bounds on this.
4.1 Upper Bounds on the Classification Calibration Dimension
We start with a simple result that establishes that the CC dimension of any multiclass loss ℓ is finite, and in fact is strictly smaller than the number of class labels n.

Lemma 10. Let ℓ : [n] × [k] → ℝ_+. Let T̂ = {t̂ ∈ ℝ^{n−1}_+ : Σ_{j=1}^{n−1} t̂_j ≤ 1}, and for each y ∈ [n], let ψ_y : T̂ → ℝ_+ be given by

ψ_y(t̂) = 1(y ≠ n)(t̂_y − 1)² + Σ_{j∈[n−1], j≠y} t̂_j².

Then ψ is classification calibrated with respect to ℓ over Δ_n. In particular, since ψ is convex, CCdim(ℓ) ≤ n − 1.
It may appear surprising that the convex surrogate ψ in the above lemma is classification calibrated with respect to all multiclass losses ℓ on n classes. However this makes intuitive sense, since in principle, for any multiclass problem, if one can estimate the conditional probabilities of the n classes accurately (which requires estimating n − 1 real-valued functions on X), then one can predict a target label that minimizes the expected loss according to these probabilities. Minimizing the above surrogate effectively corresponds to such class probability estimation. Indeed, the above lemma can be shown to hold for any surrogate that is a strictly proper composite multiclass loss [21].

In practice, when the number of class labels n is large (such as in a sequence labeling task, where n is exponential in the length of the input sequence), the above result is not very helpful; in such cases, it is of interest to develop algorithms operating on a surrogate target space in a lower-dimensional space. Next we give a different upper bound on the CC dimension that depends on the loss ℓ, and for certain losses, can be significantly tighter than the general bound above.
Theorem 11. Let ℓ : [n] × [k] → ℝ_+. Then CCdim(ℓ) ≤ rank(L), the rank of the loss matrix L.

Proof. Let rank(L) = d. We will construct a convex classification calibrated surrogate loss ψ for ℓ with surrogate target space T̂ ⊆ ℝ^d.

Let ℓ_{t_1}, …, ℓ_{t_d} be linearly independent columns of L. Let {e_1, …, e_d} denote the standard basis in ℝ^d. We can define a linear function ψ̃ : ℝ^d → ℝ^n by

ψ̃(e_j) = ℓ_{t_j}  ∀j ∈ [d].

Then for each z in the column space of L, there exists a unique vector u ∈ ℝ^d such that ψ̃(u) = z. In particular, there exist unique vectors u_1, …, u_k ∈ ℝ^d such that for each t ∈ [k], ψ̃(u_t) = ℓ_t. Let T̂ = conv({u_1, …, u_k}), and define ψ : T̂ → ℝ^n_+ as

ψ(t̂) = ψ̃(t̂);

we note that the resulting vectors are always in ℝ^n_+, since by definition, for any t̂ = Σ_{t=1}^k α_t u_t for α ∈ Δ_k, ψ(t̂) = Σ_{t=1}^k α_t ℓ_t, and ℓ_t ∈ ℝ^n_+ ∀t ∈ [k]. The function ψ is clearly convex. To show ψ is classification calibrated w.r.t. ℓ over Δ_n, we will use Theorem 7. Specifically, consider the k points z_t = ψ(u_t) = ℓ_t ∈ R_ψ for t ∈ [k]. By definition of ψ, we have S_ψ = conv({ℓ_1, …, ℓ_k}); from the definitions of positive normals and trigger probabilities, it then follows that N_{S_ψ}(z_t) = N_{S_ψ}(ℓ_t) = Q^ℓ_t for all t ∈ [k]. Thus by Theorem 7, ψ is classification calibrated w.r.t. ℓ over Δ_n.
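The construction in this proof is concrete enough to code directly. The sketch below (our addition) runs it on the Hamming loss for r = 2 (so n = 4 and rank(L) = 3 < n); the greedy column-selection strategy is our own choice:

```python
import numpy as np

# Loss matrix: Hamming loss on n = 4 classes (k = n targets); its rank is 3 < n.
y = np.arange(4)
L = np.array([[bin(a ^ t).count("1") for t in y] for a in y], dtype=float)
d = np.linalg.matrix_rank(L)

cols = []                                   # greedily pick d independent columns
for j in range(L.shape[1]):
    if np.linalg.matrix_rank(L[:, cols + [j]]) == len(cols) + 1:
        cols.append(j)
    if len(cols) == d:
        break
Lt = L[:, cols]                             # psi_tilde sends e_j to column l_{t_j}

# Solve psi_tilde(u_t) = l_t for every t; the u_t are the columns of U.
U = np.linalg.lstsq(Lt, L, rcond=None)[0]
print(np.allclose(Lt @ U, L))               # True: every l_t is reachable

alpha = np.array([0.2, 0.3, 0.1, 0.4])      # a point of Delta_k
t_hat = U @ alpha                           # lies in conv({u_1, ..., u_k})
print(bool(np.all(Lt @ t_hat >= -1e-9)))    # psi(t_hat) stays in R^n_+
```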
Example 6 (CC dimension of Hamming loss). Consider the Hamming loss ℓ^Ham defined in Example 3, for n = 2^r. For each i ∈ [r], define σ^i ∈ ℝ^n as

σ^i_y = +1 if (y − 1)_i, the i-th bit in the r-bit binary representation of (y − 1), is 1; and −1 otherwise.

Then the loss matrix L^Ham satisfies

L^Ham = (r/2) e eᵀ − (1/2) Σ_{i=1}^r σ^i (σ^i)ᵀ,

where e is the n × 1 all ones vector. Thus rank(L^Ham) ≤ r + 1, giving us CCdim(ℓ^Ham) ≤ r + 1. For r ≥ 3, this is a significantly tighter upper bound than the bound of 2^r − 1 given by Lemma 10.
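The decomposition of L^Ham and the resulting rank bound are easy to verify by brute force; in the sketch below (our addition) classes are indexed from 0 rather than 1, so the bits of y play the role of the bits of (y − 1):

```python
import numpy as np

r = 4
n = 2 ** r
y = np.arange(n)
# sigma^i: +1 where bit i of the class index is 1, -1 otherwise.
sigma = [np.where((y >> i) % 2 == 1, 1.0, -1.0) for i in range(r)]

# Hamming loss matrix: L[a, t] = number of bits where a and t differ.
L = np.array([[bin(a ^ t).count("1") for t in y] for a in y], dtype=float)

e = np.ones(n)
L_rec = (r / 2.0) * np.outer(e, e) - 0.5 * sum(np.outer(s, s) for s in sigma)
print(np.allclose(L, L_rec))                   # True: the identity holds
print(np.linalg.matrix_rank(L), "<=", r + 1)   # rank is at most r + 1
```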
We note that the upper bound of Theorem 11 need not always be tight: for example, for the ordinal regression loss, for which we already know CCdim(ℓ^ord) = 1, the theorem actually gives an upper bound of n, which is even weaker than that implied by Lemma 10.
4.2 Lower Bound on the Classification Calibration Dimension
In this section we give a lower bound on the CC dimension of a loss function ℓ and illustrate it by using it to calculate the CC dimension of the 0-1 loss. In Section 5 we will explore consequences of the lower bound for classification calibrated surrogates for certain types of ranking losses. We will need the following definition:

Definition 12. The feasible subspace dimension of a convex set C at p ∈ C, denoted by μ_C(p), is defined as the dimension of the subspace F_C(p) ∩ (−F_C(p)), where F_C(p) is the cone of feasible directions of C at p.⁷
The following gives a lower bound on the CC dimension of a loss ℓ in terms of the feasible subspace dimension of the trigger probability sets Q^ℓ_t at certain points p ∈ Q^ℓ_t:

Theorem 13. Let ℓ : [n] × [k] → ℝ_+. Then for all p ∈ relint(Δ_n) and t ∈ argmin_{t′} pᵀℓ_{t′} (i.e. such that p ∈ Q^ℓ_t):⁸

CCdim(ℓ) ≥ n − μ_{Q^ℓ_t}(p) − 1.

The proof requires extensions of the definition of positive normals and the necessary condition of Theorem 6 to sequences of points in S_ψ and is quite technical. In the appendix, we provide a proof in the special case when p ∈ relint(Δ_n) is such that inf_{z∈S_ψ} pᵀz is achieved in S_ψ, which does not require these extensions. Full proof details will be provided in a longer version of the paper. Both the proof of the lower bound and its applications make use of the following lemma, which gives a method to calculate the feasible subspace dimension for certain convex sets C and points p ∈ C:

Lemma 14. Let C = {u ∈ ℝ^n : A¹u ≤ b¹, A²u ≤ b², A³u = b³}. Let p ∈ C be such that A¹p = b¹, A²p < b². Then μ_C(p) = nullity([A¹; A³]), the dimension of the null space of the matrix obtained by stacking A¹ on A³.
The above lower bound allows us to calculate precisely the CC dimension of the 0-1 loss:
Example 7 (CC dimension of 0-1 loss). Consider the 0-1 loss ℓ^{0-1} defined in Example 1. Take p = (1/n, …, 1/n)ᵀ ∈ relint(Δ_n). Then p ∈ Q^{0-1}_t for all t ∈ [k] = [n] (see Figure 2); in particular, we have p ∈ Q^{0-1}_1. Now Q^{0-1}_1 can be written as

Q^{0-1}_1 = {q ∈ Δ_n : q_1 ≥ q_y ∀y ∈ {2, …, n}}
  = {q ∈ ℝ^n : [−e_{n−1}  I_{n−1}] q ≤ 0, −q ≤ 0, e_nᵀ q = 1},

where e_{n−1}, e_n denote the (n − 1) × 1 and n × 1 all ones vectors, respectively, and I_{n−1} denotes the (n − 1) × (n − 1) identity matrix. Moreover, we have [−e_{n−1}  I_{n−1}] p = 0 and −p < 0. Therefore, by Lemma 14, we have

μ_{Q^{0-1}_1}(p) = nullity([−e_{n−1}  I_{n−1} ; e_nᵀ]),

the nullity of the n × n matrix with rows (−1, 1, 0, …, 0), (−1, 0, 1, …, 0), …, (−1, 0, 0, …, 1), (1, 1, 1, …, 1), which is 0. Thus by Theorem 13, we get CCdim(ℓ^{0-1}) ≥ n − 1. Combined with the upper bound of Lemma 10, this gives CCdim(ℓ^{0-1}) = n − 1.
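The nullity computation above can be reproduced mechanically; a small sketch (ours) for n = 5:

```python
import numpy as np

n = 5
M = np.vstack([
    np.hstack([-np.ones((n - 1, 1)), np.eye(n - 1)]),  # rows of [-e_{n-1}  I_{n-1}]
    np.ones((1, n)),                                   # the row e_n^T
])
nullity = n - np.linalg.matrix_rank(M)
print(nullity)   # 0, so Theorem 13 gives CCdim >= n - 1 for the 0-1 loss
```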
⁷ For a set C ⊆ ℝ^n and point p ∈ C, the cone of feasible directions of C at p is defined as F_C(p) = {v ∈ ℝ^n : ∃ε_0 > 0 such that p + εv ∈ C ∀ε ∈ (0, ε_0)}.
⁸ Here relint(Δ_n) denotes the relative interior of Δ_n: relint(Δ_n) = {p ∈ Δ_n : p_y > 0 ∀y ∈ [n]}.
5 Application to Pairwise Subset Ranking
We consider an application of the above framework to analyzing certain types of subset ranking problems, where each instance x ∈ X consists of a query together with a set of r documents (for simplicity, r ∈ ℕ here is fixed), and the goal is to learn a predictor which given such an instance will return a ranking (permutation) of the r documents [8]. Duchi et al. [10] showed recently that for certain pairwise subset ranking losses ℓ, finding a predictor that minimizes the ℓ-risk is an NP-hard problem. They also showed that several common pairwise convex surrogate losses that operate on T̂ = ℝ^r (and are used to learn scores for the r documents) fail to be classification calibrated with respect to such losses ℓ, even under some low-noise conditions on the distribution, and proposed an alternative convex surrogate, also operating on T̂ = ℝ^r, that is classification calibrated under certain conditions on the distribution (i.e. over a strict subset of the associated probability simplex).

Here we provide an alternative route to analyzing the difficulty of obtaining consistent surrogates for such pairwise subset ranking problems using the classification calibration dimension. Specifically, we will show that even for a simple setting of such problems, the classification calibration dimension of the underlying loss ℓ is greater than r, and therefore no convex surrogate operating on T̂ ⊆ ℝ^r can be classification calibrated w.r.t. such a loss over the full probability simplex.

Formally, we will identify the set of class labels Y with a set G of 'preference graphs', which are simply directed acyclic graphs (DAGs) over r vertices; for each directed edge (i, j) in a preference graph g ∈ G associated with an instance x ∈ X, the i-th document in the document set in x is preferred over the j-th document. Here we will consider a simple setting where each preference graph has exactly one edge, so that |Y| = |G| = r(r − 1); in this setting, we can associate each g ∈ G with the edge (i, j) it contains, which we will write as g_(i,j). The target labels consist of permutations over r objects, so that T = S_r with |T| = r!. Consider now the following simple pairwise loss ℓ^pair : Y × T → ℝ_+:

ℓ^pair(g_(i,j), σ) = 1(σ(i) > σ(j)).   (7)

Let p = (1/(r(r−1)), …, 1/(r(r−1)))ᵀ ∈ relint(Δ_{r(r−1)}), and observe that pᵀℓ^pair_σ = 1/2 for all σ ∈ T. Thus pᵀ(ℓ^pair_σ − ℓ^pair_{σ′}) = 0 ∀σ, σ′ ∈ T, and so p ∈ Q^pair_σ ∀σ ∈ T.

Let (σ_1, …, σ_{r!}) be any fixed ordering of the permutations in T, and consider Q^pair_{σ_1}, defined by the intersection of r! − 1 half-spaces of the form qᵀ(ℓ^pair_{σ_1} − ℓ^pair_{σ_t}) ≤ 0 for t = 2, …, r! and the simplex constraints q ∈ Δ_{r(r−1)}. Moreover, from the above observation, p ∈ Q^pair_{σ_1} satisfies pᵀ(ℓ^pair_{σ_1} − ℓ^pair_{σ_t}) = 0 ∀t = 2, …, r!. Therefore, by Lemma 14, we get

μ_{Q^pair_{σ_1}}(p) = nullity([(ℓ^pair_{σ_1} − ℓ^pair_{σ_2})ᵀ; …; (ℓ^pair_{σ_1} − ℓ^pair_{σ_{r!}})ᵀ; eᵀ]),   (8)

where e is the r(r−1) × 1 all ones vector. It is not hard to see that the set {ℓ^pair_σ : σ ∈ T} spans an (r(r−1)/2)-dimensional space, and hence the nullity of the above matrix is at most r(r−1) − r(r−1)/2 + 1 = r(r−1)/2 + 1. Thus by Theorem 13, we get that CCdim(ℓ^pair) ≥ r(r−1) − (r(r−1)/2 + 1) − 1 = r(r−1)/2 − 2. In particular, for r ≥ 5, this gives CCdim(ℓ^pair) > r, and therefore establishes that no convex surrogate ψ operating on a surrogate target space T̂ ⊆ ℝ^r can be classification calibrated with respect to ℓ^pair on the full probability simplex Δ_{r(r−1)}.
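The rank argument can be checked by brute force for small r; the sketch below (our addition) enumerates all permutations, stacks the difference rows and eᵀ as in (8), and reports the nullity. The exact nullity it finds, r(r−1)/2 − 1, is slightly smaller than the conservative estimate r(r−1)/2 + 1 used above, which only strengthens the conclusion:

```python
import numpy as np
from itertools import permutations

r = 4
pairs = [(i, j) for i in range(r) for j in range(r) if i != j]   # n = r(r-1)
perms = list(permutations(range(r)))

def loss_vec(sigma):
    # l_pair(g_(i,j), sigma) = 1(sigma(i) > sigma(j)), matching (7)
    return np.array([1.0 if sigma[i] > sigma[j] else 0.0 for (i, j) in pairs])

vecs = [loss_vec(s) for s in perms]
rows = [vecs[0] - v for v in vecs[1:]] + [np.ones(len(pairs))]
nullity = len(pairs) - np.linalg.matrix_rank(np.array(rows))
print("nullity:", nullity)                         # 5 = r(r-1)/2 - 1 for r = 4
print("lower bound:", len(pairs) - nullity - 1)    # 6 > r, per Theorem 13
```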
6 Conclusion

We developed a framework for analyzing consistency for general multiclass learning problems defined by a general loss matrix, introduced the notion of classification calibration dimension of a multiclass loss, and used this to analyze consistency properties of surrogate losses for various general multiclass problems. An interesting direction would be to develop a generic procedure for designing consistent convex surrogates operating on a 'minimal' surrogate target space according to the classification calibration dimension of the loss matrix. It would also be of interest to extend the results here to account for noise conditions as in [9, 10].
Acknowledgments
We would like to thank the anonymous reviewers for helpful comments. HG thanks Microsoft
Research India for a travel grant. This research is supported in part by a Ramanujan Fellowship to
SA from DST and an Indo-US Joint Center Award from the Indo-US Science & Technology Forum.
References
[1] Gábor Lugosi and Nicolas Vayatis. On the Bayes-risk consistency of regularized boosting methods. Annals of Statistics, 32(1):30-55, 2004.
[2] Wenxin Jiang. Process consistency for AdaBoost. Annals of Statistics, 32(1):13-29, 2004.
[3] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32(1):56-134, 2004.
[4] Ingo Steinwart. Consistency of support vector machines and other regularized kernel classifiers. IEEE Transactions on Information Theory, 51(1):128-142, 2005.
[5] Peter Bartlett, Michael Jordan, and Jon McAuliffe. Convexity, classification and risk bounds. Journal of the American Statistical Association, 101(473):138-156, 2006.
[6] Tong Zhang. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5:1225-1251, 2004.
[7] Ambuj Tewari and Peter Bartlett. On the consistency of multiclass classification methods. Journal of Machine Learning Research, 8:1007-1025, 2007.
[8] David Cossock and Tong Zhang. Statistical analysis of Bayes optimal subset ranking. IEEE Transactions on Information Theory, 54(11):5140-5154, 2008.
[9] Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. Listwise approach to learning to rank: Theory and algorithm. In International Conference on Machine Learning, 2008.
[10] John Duchi, Lester Mackey, and Michael Jordan. On the consistency of ranking algorithms. In International Conference on Machine Learning, 2010.
[11] Pradeep Ravikumar, Ambuj Tewari, and Eunho Yang. On NDCG consistency of listwise ranking methods. In International Conference on Artificial Intelligence and Statistics (AISTATS), volume 15. JMLR: W&CP, 2011.
[12] David Buffoni, Clément Calauzènes, Patrick Gallinari, and Nicolas Usunier. Learning scoring functions with order-preserving losses and standardized supervision. In International Conference on Machine Learning, 2011.
[13] Wei Gao and Zhi-Hua Zhou. On the consistency of multi-label learning. In Conference on Learning Theory, 2011.
[14] Wojciech Kotlowski, Krzysztof Dembczynski, and Eyke Huellermeier. Bipartite ranking through minimization of univariate loss. In International Conference on Machine Learning, 2011.
[15] Ingo Steinwart. How to compare different loss functions and their risks. Constructive Approximation, 26:225-287, 2007.
[16] Ben Taskar, Carlos Guestrin, and Daphne Koller. Max-margin Markov networks. In Neural Information Processing Systems, 2003.
[17] Deirdre O'Brien, Maya Gupta, and Robert Gray. Cost-sensitive multi-class classification from probability estimates. In International Conference on Machine Learning, 2008.
[18] Nicolas Lambert and Yoav Shoham. Eliciting truthful answers to multiple-choice questions. In ACM Conference on Electronic Commerce, 2009.
[19] Dimitri Bertsekas, Angelia Nedic, and Asuman Ozdaglar. Convex Analysis and Optimization. Athena Scientific, 2003.
[20] Jean Gallier. Notes on convex sets, polytopes, polyhedra, combinatorial topology, Voronoi diagrams and Delaunay triangulations. Technical report, Department of Computer and Information Science, University of Pennsylvania, 2009.
[21] Elodie Vernet, Robert C. Williamson, and Mark D. Reid. Composite multiclass losses. In Neural Information Processing Systems, 2011.
3,898 | 4,529 | Interpreting prediction markets: a stochastic
approach
Nicolás Della Penna
Research School of Computer Science
The Australian National University
[email protected]
Rafael M. Frongillo
Computer Science Division
University of California, Berkeley
[email protected]
Mark D. Reid
Research School of Computer Science
The Australian National University & NICTA
[email protected]
Abstract
We strengthen recent connections between prediction markets and learning by showing that a natural class of market makers can be understood
as performing stochastic mirror descent when trader demands are sequentially drawn from a fixed distribution. This provides new insights into how
market prices (and price paths) may be interpreted as a summary of the
market's belief distribution by relating them to the optimization problem
being solved. In particular, we show that under certain conditions the stationary point of the stochastic process of prices generated by the market
is equal to the market's Walrasian equilibrium of classic market analysis.
Together, these results suggest how traditional market making mechanisms
might be replaced with general purpose learning algorithms while still retaining guarantees about their behaviour.
1 Introduction and literature review
This paper is part of an ongoing line of research, spanning several authors, into formal
connections between markets and machine learning. In [5] an equivalence is shown between
the theoretically popular prediction market makers based on sequences of proper scoring
rules and follow the regularised leader, a form of no-regret online learning. By modelling
the traders that demand the assets the market maker is offering we are able to extend
the equivalence to stochastic mirror descent. The dynamics of wealth transfer is studied
in [3], for a sequence of markets between agents that behave as Kelly bettors (i.e. have log
utilities), and an equivalence to stochastic gradient descent is analysed. More broadly, [9, 2]
have analysed how a wide range of machine learning models can be implemented in terms
of market equilibria.
The literature on the interpretation of prediction market prices [7, 11] has had the goal of
relating the equilibrium prices to the distribution of the beliefs of traders. More recent work
[8] has looked at a stochastic model, and studied the behavior of simple agents sequentially
interacting with the market. We continue this latter path of research, motivated by the
observation that the equilibrium price may be a poor predictor of the behavior in a volatile
prediction market. As such, we seek a more detailed understanding of the market than the
equilibrium point: we would like to know what the 'stationary distribution' of the price
is, as time goes to infinity.
As is standard in the literature, we assume a fixed (product) distribution over traders' beliefs
and wealth. Our model features an automated market maker, following the framework of [1],
which is becoming a standard framework in the field.
We obtain two results. First, we prove that under certain conditions the stationary point
of our stochastic process defined by the market maker and a belief distribution of traders
converges to the Walrasian equilibrium of the market as the market liquidity increases. This
result, stated in Theorem 1, is general in the sense that only technical convergence conditions
are placed on the demand functions of the traders; as such, we believe it is a generalisation
of the stochastic result of [8] to cases where agents are not limited to linear demands,
and leave this precise connection to future work.
Second, we show in Corollary 1 that when traders are Kelly bettors, the resulting stochastic
market process is equivalent to stochastic mirror descent; see e.g. [6]. This result adds to
the growing literature which relates prediction markets, and automated market makers in
general, to online learning; see e.g. [1], [5], [3].
This connection to mirror descent seems to suggest that the prices in a prediction market
at any given time may be meaningless, as the final point in stochastic mirror descent often
has poor convergence guarantees. However, standard results suggest that a prudent way
to form a 'consensus estimate' from a prediction market is to average the prices. The
average price, assuming our market model is reasonable, is provably close to the stationary
price. In Section 5 we give a natural example that exhibits this behavior. Beyond this,
however, Theorem 2 gives us insight into the relationship between the market liquidity and
the convergence of prices; in particular it suggests that we should increase liquidity at a rate
of √t if we wish the price to settle down at the right rate.
2 Model
Our market model will follow the automated market maker framework of [1]. We will equip our market maker with a strictly convex function C : ℝ^n → ℝ which is twice continuously differentiable. For brevity we will write π := ∇C. The outcome space is Ω, and the contracts are determined by a payoff function ρ : Ω → ℝ^n such that Π := π(ℝ^n) = ConvHull(ρ(Ω)). That is, the derivative space Π of C (the 'instantaneous prices') must be the convex hull of the payoffs.

A trader purchasing shares at the current prices θ ∈ ℝ^n pays C(π⁻¹(θ) + r) − C(π⁻¹(θ)) for the bundle of contracts r ∈ ℝ^n. Note that our dependence solely on θ limits our model slightly, since in general the share space (domain of C) may contain more information than the current prices (cf. [1]). The bundle r is determined by an agent's demand function d(C, θ) which specifies the bundle to buy given the price θ and the cost function C.

Our market dynamics are the following. The market maker posts the current price θ_t, and at each time t = 1 … T, a trader is chosen with demand function d drawn i.i.d. from some demand distribution D. Intuitively, these demands are parameterized by latent variables such as the agent's belief p ∈ Δ_Ω and total wealth W. The price is then updated to

θ_{t+1} = π(π⁻¹(θ_t) + d(C, θ_t)).   (1)

After update T, the outcome ω is revealed and payout ρ(ω)_i is given for each contract i ∈ {1, …, n}.
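To make the dynamics concrete, here is a minimal sketch (ours, not from the paper) of one market round under the LMSR cost function; the fixed demand function at the bottom is a hypothetical stand-in for whatever d(C, ·) a trader actually uses, and tracking shares s with θ = π(s) is equivalent to the price-space update (1):

```python
import numpy as np

b = 10.0                                    # LMSR liquidity parameter

def C(s):                                   # cost function: C(s) = b log sum_i e^{s_i/b}
    return b * np.log(np.sum(np.exp(s / b)))

def price(s):                               # pi(s) = grad C(s); lies in the simplex
    e = np.exp(s / b)
    return e / e.sum()

# One trade: trader buys r = d(theta) at posted price theta = pi(s),
# pays C(s + r) - C(s), and the posted price moves to pi(s + r).
s = np.zeros(2)
demand = lambda theta: np.array([0.5, 0.0])   # hypothetical fixed demand
for t in range(3):
    r = demand(price(s))
    paid = C(s + r) - C(s)
    s = s + r
    print(t, price(s), round(paid, 4))
```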
3 Stationarity and equilibrium
We first would like to relate our stochastic model (1) to the standard notion of market equilibrium from the Economics literature, which we call the Walrasian equilibrium to avoid confusion. Here prices are fixed, and the equilibrium price is one that clears the market, meaning that the sum of the demands r is 0 ∈ ℝ^n. In fact, we will show that the stationary point of our process approaches the Walrasian equilibrium point as the liquidity of the market approaches infinity.
First, we must add a liquidity parameter to our market. Following the LMSR (the cost function C(s) = b ln Σ_i e^{s_i/b}), we define

C_b(s) := b C(s/b).   (2)

This transformation of a convex function is called a perspective function and is known to preserve convexity [4]. Observe that π_b(s) := ∇C_b(s) = ∇C(s/b) = π(s/b), meaning that the price under C_b at s is the same as the price under C at s/b. As with the LMSR, we call b the liquidity parameter; this terminology is justified by noting that one definition of liquidity, 1/λ_max(∇²C_b(s)) = b/λ_max(∇²C(s/b)) (cf. [1]). In the following, we will consider the limit as b → ∞.

Second, in order to connect to the Walrasian equilibrium, we need a notion of a fixed-price demand function: if a trader has demand d(C, θ) given C, what would the same trader's demand be under a market where prices are fixed and do not 'change' during a trade? For the sake of generality, we restrict our allowable demand functions to the ones for which the limit

d(F, θ) := lim_{b→∞} d(C_b, θ)   (3)

exists; this demand d(F, θ) will be the corresponding fixed-price demand for d.
We now define the Walrasian equilibrium point θ*, which is simply the price at which the market clears when traders have demands distributed by D. Formally, this is the following condition:¹

∫_D d(F, θ*) dD(d) = 0.   (4)

Note that 0 ∈ ℝ^n; the demand for each contract should be balanced.

The stationary point of our stochastic process, on the other hand, is the price θ^s_b for which the expected price fluctuation is 0. Formally, we have

E_{d∼D}[Δ(θ^s_b, d(C_b, θ^s_b))] = 0,   (5)

where Δ(θ, d) := π(π⁻¹(θ) + d) − θ is the price fluctuation. We now consider the limit of our stochastic process as the market liquidity approaches ∞.
Theorem 1. Let C be a strictly convex and λ-smooth² cost function, and assume that (∂/∂b) d(C_b, θ) = o(1/b) uniformly in θ and all d ∈ D. If furthermore the limit (3) is uniform in θ and d, then lim_{b→∞} θ^s_b = θ*.
Proof. Note that by the stationarity condition (5) we may define θ* and θ^s_b to be the roots of the following 'excess demand' functions, respectively:

Z(θ) := ∫_D d(F, θ) dD(d),    Z^s_b(θ) := b E_{d∼D}[Δ(θ, d(C_b, θ))],

where we scale the latter by b so that Z^s_b does not limit to the zero function.

Let s = π⁻¹(θ) be the current share vector. Then we have

lim_{b→∞} b Δ(θ, d(C_b, θ)) = lim_{b→∞} b [π(π⁻¹(θ) + d(C_b, θ)/b) − θ]
  = lim_{a→0} [π(s + a d(C_{1/a}, θ)) − θ] / a
  = lim_{a→0} ∇π(s + a d(C_{1/a}, θ)) [d(C_{1/a}, θ) + a (∂/∂a) d(C_{1/a}, θ)]
  = lim_{b→∞} ∇π(s + (1/b) d(C_b, θ)) [d(C_b, θ) + (1/b)(∂/∂b) d(C_b, θ)(−b²)]
  = lim_{b→∞} ∇²C(s) d(C_b, θ) = ∇²C(s) d(F, θ),

where we apply L'Hôpital's rule for the third equality. Crucially, the above limit is uniform with respect to both d ∈ D and θ ∈ Π; uniformity in d is by assumption, and uniformity in θ follows from λ-smoothness of C, since C is dominated by a quadratic. Since the limit is uniform with respect to D, we now have

lim_{b→∞} Z^s_b(θ) = lim_{b→∞} b E_{d∼D}[Δ(θ, d(C_b, θ))] = E_{d∼D}[lim_{b→∞} b Δ(θ, d(C_b, θ))]
  = ∇²C(s) E_{d∼D}[d(F, θ)] = ∇²C(s) Z(θ).

As ∇²C(s) is positive definite by assumption on C, we can conclude that lim_{b→∞} Z^s_b and Z share the same zeroes. Since Z has compact domain and is assumed continuous with a unique zero θ*, for each ε ∈ (0, ε_max) there must be some δ > 0 s.t. |Z(θ)| > ε for all θ s.t. ‖θ − θ*‖ > δ (otherwise there would be a sequence θ_n → θ′ s.t. Z(θ′) = 0 but θ′ ≠ θ*). By uniform convergence there must be a B > 0 s.t. for all b > B we have ‖Z^s_b − Z‖_∞ < ε/2. In particular, for θ s.t. ‖θ − θ*‖ > δ, |Z^s_b(θ)| > ε/2. Thus, the corresponding zeros θ^s_b must be within δ of θ*. Hence lim_{b→∞} θ^s_b = θ*.³

¹ Here and throughout we ignore technical issues of uniqueness. One may simply restrict to the class of demands for which uniqueness is satisfied.
² C is λ-smooth if λ_max(∇²C) ≤ λ.
³ We thank Avraham Ruderman for a helpful discussion regarding this proof.
3.1 Utility-based demands
Maximum Expected Utility (MEU) demand functions are a particular kind of demand function derived by assuming a trader has some belief p ∈ Δ_n over the outcomes in Ω, some wealth W ≥ 0, and a monotonically increasing utility function of money u : ℝ → ℝ. If such a trader buys a bundle r of contracts from a market maker with cost function C and price θ, her wealth after ω occurs is W_ω(C, W, θ, r) := W + ρ(ω)·r − [C(π⁻¹(θ) + r) − C(π⁻¹(θ))]. We ensure traders do not go into debt by requiring that traders only make demands such that this final wealth is nonnegative: ∀ω, W_ω(C, W, θ, r) ≥ 0. The set of debt-free bundles for wealth W and market C at price θ is denoted S(C, W, θ) := {r ∈ ℝ^n : min_ω W_ω(C, W, θ, r) ≥ 0}.

A continuous MEU demand function d^u_{W,p}(C, θ) is then just the demand that maximizes a trader's expected utility subject to the debt-free constraint. That is,

d^u_{W,p}(C, θ) := argmax_{r ∈ S(C,W,θ)} E_{ω∼p}[u(W_ω(C, W, θ, r))].   (6)

We also define a fixed-price MEU demand function d^u_{W,p}(F, θ) similarly, where W_ω(F, W, θ, r) := W + ρ(ω)·r − θ·r and S(F, W, θ) := {r ∈ ℝ^n : min_ω W_ω(F, W, θ, r) ≥ 0} are the fixed price analogues to the continuously priced versions above. Using the notation bS := {b r : r ∈ S}, the following relationships between the continuous and fixed price versions of W_ω, S, and the expected utility are a consequence of the convexity of C. Their main purpose is to highlight the relationship between wealth and liquidity in MEU demands. In particular, they show that scaling up of liquidity is equivalent to a scaling down of wealth and that the continuously priced constraints and wealth functions monotonically approach the fixed priced versions.

Lemma 1. For any strictly convex cost function C, wealth W > 0, price θ, demand r, and liquidity parameter b > 0 the following properties hold: 1. W_ω(C_b, W, θ, r) = b W_ω(C, W/b, θ, r/b); 2. S(C_b, W, θ) = b S(C, W/b, θ); 3. S(C, W, θ) is convex for all C; 4. S(C, W, θ) ⊆ S(C_b, W, θ) ⊆ S(F, W, θ) for all b ≥ 1; 5. For monotone utilities u, E_{ω∼p}[u(W_ω(F, W, θ, r))] ≥ E_{ω∼p}[u(W_ω(C, W, θ, r))].
Proof. Property (1) follows from a simple computation:

W_ω(C_b, W, θ, r) = W + ρ(ω)·r − b C(π⁻¹(θ) + r/b) + b C(π⁻¹(θ))
  = b [W/b + ρ(ω)·(r/b) − C(π⁻¹(θ) + r/b) + C(π⁻¹(θ))],

which equals b W_ω(C, W/b, θ, r/b) by definition. We now can see property (2) as well:

S(C_b, W, θ) = {r : min_ω b W_ω(C, W/b, θ, r/b) ≥ 0} = {b r : min_ω W_ω(C, W/b, θ, r) ≥ 0}.

For (3), define f_{C,s,ω}(r) = C(s + r) − C(s) − ρ(ω)·r, which is the ex-post cost of purchasing bundle r. As C is convex, and f_{C,s,ω} is a shifted and translated version of C plus a linear term, f_{C,s,ω} is convex also. The constraint W_ω(C, W, θ, r) ≥ 0 then translates to f_{C,s,ω}(r) ≤ W, and thus the set of r which satisfy the constraint is convex as a sublevel set of a convex function. Now S(C, W, θ) is convex as an intersection of convex sets, proving (3).

For (4) suppose r satisfies f_{C,s,ω}(r) ≤ W. Note that f_{C,s,ω}(0) = 0 always. Then by convexity we have for f := f_{C,s,ω} that f(r/b) = f((1/b) r + ((b−1)/b) 0) ≤ (1/b) f(r) + ((b−1)/b) 0 ≤ W/b, which implies S(C, W, θ) ⊆ S(C_b, W, θ) when considering (2). To complete (4) note that f_{C,s,ω} dominates f_{F,s,ω} : r ↦ (π(s) − ρ(ω))·r by convexity of C: C(s + r) − C(s) ≥ ∇C(s)·r.

Finally, proof of (5) is obtained by noting that the convexity of C means that C(π⁻¹(θ) + r) − C(π⁻¹(θ)) ≥ ∇C(π⁻¹(θ))·r = θ·r and exploiting the monotonicity of u.
Lemma 1 shows us that MEU demands have a lot of structure, and in particular, properties (4) and (5) suggest that they may satisfy the conditions of Theorem 1; we leave this as an open question for future work. Another interesting aspect of Lemma 1 is the relationship between markets with cost function C_b and wealths W and markets with cost function C and wealths W/b; indeed, properties (1) and (2) suggest that the liquidity limit should in some sense be equivalent to a wealth limit, in that increasing liquidity by a factor b should yield similar dynamics to decreasing the wealths by b. This would relate our model to that of [8], where the authors essentially show a wealth-limit version of Theorem 1 for a binary-outcome market where traders have linear utilities (a special case of (6)). We leave this precise connection for future work.
4 Market making as mirror descent
We now explore the surprising relationship between our stochastic price update and standard stochastic optimization techniques. In particular, we will relate our model to a stochastic mirror descent of the form

x_{t+1} = argmin_{x∈Π} {η x·∇F(x_t; ξ) + D_R(x, x_t)},   (7)

where at each step ξ ∼ Ξ are i.i.d. and R is some strictly convex function. We will refer to an algorithm of the form (7) as a stochastic mirror descent of f(x) := E_{ξ∼Ξ}[F(x; ξ)].

Theorem 2. If for all d ∈ D we have some F(·; d) : ℝ^n → ℝ such that d(R*, θ) = −∇F(θ; d), then the stochastic update of our model (1) is exactly a stochastic mirror descent of f(θ) = E_{d∼D}[F(θ; d)].

Proof. By standard arguments, the mirror descent update (7) can be rewritten as

x_{t+1} = ∇R*(∇R(x_t) − η ∇F(x_t; ξ)),

where R* is the conjugate dual of R. Take R = C* and η = 1, and let ξ = d ∼ D. By assumption, we have ∇F(x; d) = −d(R*, x) = −d(C, x) for all d. As ∇R* = ∇C = π, we have π⁻¹ = (∇R*)⁻¹ = ∇R by duality, and thus our update becomes x_{t+1} = π(π⁻¹(x_t) + d(C, x_t)), which exactly matches the stochastic update of our model (1).
As an example, consider Kelly betters, which correspond to fixed-price demands d(C, θ) := d^log_{W,p}(F, θ) with utility u(x) = log x as defined in (3). A simple calculation shows that our update becomes

θ_{t+1} = π(π⁻¹(θ_t) + W (p − θ_t)/(θ_t (1 − θ_t))),   (8)

where W and p are drawn (independently) from W and P.

Corollary 1. The stochastic update for fixed-price Kelly betters (8) is exactly a stochastic mirror descent of f(θ) = W̄ · KL(p̄, θ), where p̄ and W̄ are the means of P and W, respectively.
Proof. We take F(x; d^log_{W,p}) = W · (KL(p, x) + H(p)). Then

∇F(x; d^log_{W,p}) = W (−p/x + (1 − p)/(1 − x)) = −W (p − x)/(x(1 − x)) = −d^log_{W,p}(F, x).

Hence, by Theorem 2 our update is a stochastic mirror descent of:

f(x) := E[F(x; d^log_{W,p})] = −E[W p log x + W (1 − p) log(1 − x)] = W̄ · (KL(p̄, x) + H(p̄)),

which of course is equivalent to W̄ · KL(p̄, x) as the entropy term does not depend on x.
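A short simulation (ours, under the same modelling assumptions as above) makes this tangible: running the update (8) in a binary LMSR-style market with beliefs drawn i.i.d. from a fixed distribution, the running average price settles near the mean belief p̄, the minimizer of W̄ · KL(p̄, θ). The Beta belief distribution below is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
b, T, W = 10.0, 2000, 1.0
beliefs = rng.beta(6.0, 4.0, size=T)       # belief distribution with mean 0.6

x = 0.0                                    # net shares of contract 1 (1-D market)
prices = []
for p in beliefs:
    theta = 1.0 / (1.0 + np.exp(-x / b))   # pi(x): LMSR price of contract 1
    x += W * (p - theta) / (theta * (1 - theta))   # fixed-price Kelly bundle (8)
    prices.append(theta)

print("mean belief:", beliefs.mean())      # ~0.6
print("avg price  :", np.mean(prices))     # close to the mean belief
print("last price :", prices[-1])          # may wander; averaging stabilizes it
```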
Note that while this last result is quite compelling, we have mixed fixed-price demands with a continuous-price market model; see Section 3.1. One could interpret this combination as a model in which the market maker can only adjust the prices after a trade, according to a fixed convex cost function C. This of course differs from the standard model, which adjusts the price continuously during a trade.
4.1 Leveraging existing learning results
Theorem 2 not only identifies a fascinating connection between machine learning and our
stochastic prediction market model, but it also allows us to use powerful existing techniques
to make broad conclusions about the behavior of our model. Consider the following result:
Proposition 1 ([6]). If ‖∇F(θ; p)‖² ≤ G² for all p, θ, and R is σ-strongly convex, then with probability 1 − δ,

f(θ̄_T) ≤ min_θ f(θ) + D²/(ηT) + (G²η/(2σ))(1 + 4√(log(1/δ))).

In our context, Proposition 1 says that the average of the prices will be a very good estimate of the minimizer of f, which as suggested by Corollary 1 happens to be the underlying mean belief p̄ of the traders! Moreover, as the Kelly demands are linear in both p and W, it is easy to see from Theorem 1 that p̄ is also the stationary point and the Walrasian equilibrium point (the latter was also shown by [11]). On the other hand, as we demonstrate next, it is not hard to come up with an example where the instantaneous price θ_t is quite far from the equilibrium at any given time period.

[Figure 1: Price movement for Kelly betters with binomial (q = 0.6, n = 6, α = 0.5) beliefs in the LMSR market with liquidity b = 10. The plot tracks the price of contract 1 over 2000 trades, together with the average price and the average belief.]

Before moving to our empirical work, we make one final point. The above relationship between our stochastic market model and mirror descent sheds light on an important question: how might an automated market maker adjust the liquidity so that the market actually converges to the mean of the traders' beliefs? The learning parameter η can be thought of as the inverse of the liquidity, and as such, Proposition 1 suggests that increasing the liquidity as √t may cause the mean price to converge to the mean belief (assuming a fixed underlying belief distribution).
5 Empirical work
Example: biased coin. Consider a classic Bayesian setting where a coin has unknown bias Pr[heads] = q, and traders have a prior Beta(α, α) over q (i.e., traders are α-confident that the coin is fair). Now suppose each trader independently observes n flips from the coin, and updates her belief; upon seeing k heads, a trader would have posterior Beta(α + k, α + n − k). When presented with a prediction market with contracts for a single toss of the coin, where contract 0 pays $1 for tails and contract 1 pays $1 for heads, a trader would purchase contracts as if according to the mean of their posterior. Hence, the belief distribution P of the market assigns weight P(p) = (n choose k) q^k (1 − q)^{n−k} to belief p = (α + k)/(2α + n), yielding a biased mean belief of (α + nq)/(2α + n).

[Figure 2: Mean square loss of average and instantaneous prices relative to the mean belief of 0.26 over 20 simulations for State 9 for b = 1 (left), b = 3 (middle), and b = 10 (right). Bars show standard deviation.]
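For concreteness, the induced belief distribution and its biased mean can be computed directly; a sketch (ours) using the parameters of Figure 1 (q = 0.6, n = 6, α = 0.5):

```python
import numpy as np
from math import comb

q, n, alpha = 0.6, 6, 0.5
k = np.arange(n + 1)
weights = np.array([comb(n, int(kk)) * q**kk * (1 - q)**(n - kk) for kk in k])
beliefs = (alpha + k) / (2 * alpha + n)    # posterior means of Beta(a+k, a+n-k)
print(beliefs @ weights)                   # mean belief of the market
print((alpha + n * q) / (2 * alpha + n))   # closed form (a + nq)/(2a + n): same
```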
We show a typical simulation of this market in Figure 1, where traders behave as Kelly
betters in the fixed-price LMSR. Clearly, after almost every trade, the market price is
quite far from the equilibrium/stationary point, and hence the classical supply and demand
analysis of this market yields a poor description of the actual behavior, and in particular, of
the predictive quality of the price at any given time. However, the mean price is consistently
close to the mean belief of the traders, which in turn is quite close to the true parameter q.
Election Survey Data We now compare the quality of the running average price versus
the instantaneous price as a predictor of the mean belief of a market. We do so by simulating
a market maker interacting with traders with unit wealth, log utility, and beliefs drawn from
a fixed distribution. The belief distributions are derived from the Princeton election survey
data[10]. For each of the 50 US states, participants in the survey were asked to estimate
the probability that one of two possible candidates were going to win that state.4 We use
these 50 sets of estimates as 50 different empirical distributions from which to draw trader
beliefs.
A simulation is configured by choosing one of the 50 empirical belief distributions S, a market liquidity parameter b to define the LMSR cost function C(s) = b ln Σ_i e^{s_i/b}, and an initial market position vector of (0, 0), that is, no contracts for either outcome. A configured simulation is run for T trades. At each trade, a belief p is drawn from S uniformly and with replacement. This belief is used to determine the demand of the trader relative to the current market pricing. The trader purchases a bundle of contracts according to its demand and the market moves its position and price accordingly. The complete price path θ_t for t = 1, …, T of the market is recorded as well as a running average price θ̄_t := (1/t) Σ_{i=1}^t θ_i for t = 1, …, T. For each of the 50 empirical belief distributions we configured 9 markets with b ∈ {1, 2, 3, 5, 10, 15, 20, 30, 50} and ran 20 independent simulations of T = 100 trades. We present a portion of the results for the empirical distributions for states 9 and 11. States 9 and 11 have, respectively, sample sizes of 2,717 and 2,709; means 0.26 and 0.9; and variances 0.04 and 0.02. These are chosen as being representative of the rest of the simulation results: State 9 with mean off-center and a spread of beliefs (high uncertainty) and State 11 with highly concentrated beliefs around a single outcome (low uncertainty).
highly concentrated beliefs around a single outcome (low uncertainty).
The results are summarised in Figures 2, 3, and 4. The first show the square loss of the
average and instaneous prices relative to the mean belief for high uncertainty State 9 for
b = 1, 3, 10. Clearly, the average price is a much more reliable estimator of the mean belief
for low liquidity (b = 1) and is only outperformed by the instaneous price for higher liquidity
(b = 10), but then only early in trading. Similar plots for State 11 are shown in Figure 3
where the advantage of using the average price is significantly diminished.
4
The original dataset contains conjunctions of wins as well as conditional statements but we
only use the single variable results of the survey.
7
0
20
40
60
80
100
b = 10
Instant
Averaged
0.06
Loss
0.04
0.02
0.00
0.02
0.04
Loss
0.06
0.08
Instant
Averaged
0.00
0.00
0.02
0.04
Loss
0.06
0.08
Instant
Averaged
Square loss of price to mean belief for State 11
b=3
0.08
b=1
0.10
Square loss of price to mean belief for State 11
0.10
0.10
Square loss of price to mean belief for State 11
0
20
40
Trades
60
80
100
0
20
40
Trades
60
80
100
Trades
Figure 3: Mean square loss of average and instantaneous prices relative to the mean belief
of 0.9 over 20 simulations for State 11 for b = 1 (left), b = 3 (middle), and b = 10 (right).
Bars show standard deviation.
Figure 4 shows the improvement the average price has over the instantaneous price in square loss relative to the mean belief for all liquidity settings and highlights that average prices work better in low liquidity settings, consistent with the theory. Similar trends were observed for all the other States, depending on whether they had high uncertainty (in which case average price was a much better estimator) or low uncertainty (in which case instantaneous price was better).
[Figure 4: An overview of the results for States 9 (left) and 11 (right). For each trade and choice of b, the vertical value shows the improvement of the average price over the instantaneous price as measured by square loss relative to the mean.]
6 Conclusion and future work
As noted in Section 3.1, there are several open questions with regard to maximum expected utility demands and Theorem 1, as well as the relationship between trader wealth and market liquidity. It would also be interesting to have an application of Theorem 2 to a continuous-price model, which yields a natural minimization as in Corollary 1. The equivalence to mirror descent established in Theorem 2 may also lead to a better understanding of the optimal manner in which an automated prediction market ought to increase liquidity so as to maximise efficiency.
Acknowledgments
This work was supported by the Australian Research Council (ARC). NICTA is funded
by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the ARC through the ICT Centre of Excellence
program. The first author was partially supported by NSF grant CC-0964033 and by a
Google University Research Award.
References
[1] J. Abernethy, Y. Chen, and J.W. Vaughan. An optimization-based framework for automated market-making. In Proceedings of the 11th ACM Conference on Electronic Commerce (EC'11), 2011.
[2] A. Barbu and N. Lay. An introduction to artificial prediction markets for classification. Arxiv preprint arXiv:1102.1465, 2011.
[3] A. Beygelzimer, J. Langford, and D. Pennock. Learning performance of prediction markets with Kelly bettors. 2012.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[5] Y. Chen and J.W. Vaughan. A new understanding of prediction markets via no-regret learning, pages 189-198. 2010.
[6] J. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari. Composite objective mirror descent. COLT, 2010.
[7] C.F. Manski. Interpreting the predictions of prediction markets. Technical report, National Bureau of Economic Research, 2004.
[8] A. Othman and T. Sandholm. When do markets with simple agents fail? In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, pages 865-872. International Foundation for Autonomous Agents and Multiagent Systems, 2010.
[9] A. Storkey. Machine learning markets. AISTATS, 2012.
[10] G. Wang, S.R. Kulkarni, H.V. Poor, and D.N. Osherson. Aggregating large sets of probabilistic forecasts by weighted coherent adjustment. Decision Analysis, 8(2):128, 2011.
[11] J. Wolfers and E. Zitzewitz. Interpreting prediction market prices as probabilities. Technical report, National Bureau of Economic Research, 2006.
3,899 | 453 | Experimental Evaluation of Learning in a Neural Microsystem
Joshua Alspector, Anthony Jayakumar, Stephan Luna†
Bellcore
Morristown, NJ 07962-1910
Abstract
We report learning measurements from a system composed of a cascadable
learning chip, data generators and analyzers for training pattern presentation,
and an X-windows based software interface. The 32 neuron learning chip has
496 adaptive synapses and can perform Boltzmann and mean-field learning
using separate noise and gain controls. We have used this system to do learning
experiments on the parity and replication problem. The system settling time
limits the learning speed to about 100,000 patterns per second roughly
independent of system size.
1. INTRODUCTION
We have implemented a model of learning in neural networks using feedback
connections and a local learning rule. Even though back-propagation [1]
(Rumelhart, 1986) networks are feedforward in processing, they have separate, implicit
feedback paths during learning for error propagation. Networks with explicit, full-time
feedback paths can perform pattern completion [2] (Hopfield, 1982), can learn many-to-one
mappings, can learn probability distributions, and can have interesting temporal and
dynamical properties in contrast to the single forward pass processing of multilayer
perceptrons trained with back-propagation or other means. Because of the potential for
complex dynamics, feedback networks require a reliable method of relaxation for
learning and retrieval of static patterns. The Boltzmann machine [3] (Ackley, 1985) uses
stochastic settling while the mean-field theory version [4] (Peterson, 1987) uses a more
computationally efficient deterministic technique.
We have previously shown that Boltzmann learning can be implemented in VLSI [5]
(Alspector, 1989). We have also shown, by simulation [6] (Alspector, 1991a), that
Boltzmann and mean-field networks can have powerful learning and representation
properties just like the more thoroughly studied back-propagation methods. In this paper,
we demonstrate these properties using new, expandable parallel hardware for on-chip
learning.
† Permanent address: University of California, Berkeley; EECS Dep't, Cory Hall; Berkeley, CA 94720
2. VLSI IMPLEMENTATION
2.1 Electronic Model
We have implemented these feedback networks in VLSI which speeds up learning by
many orders of magnitude due to the parallel nature of weight adjustment and neuron
state update. Our choice of learning technique for implementation is due mainly to the
local learning rule which makes it much easier to cast these networks into electronics
than back-propagation.
Individual neurons in the Boltzmann machine have a probabilistic decision rule such that
neuron $i$ is in state $s_i = 1$ with probability

$$\Pr(s_i = 1) = \frac{1}{1 + e^{-u_i/T}} \qquad (1)$$

where $u_i = \sum_j w_{ij} s_j$ is the net input to each neuron calculated by current summing and $T$
is a parameter that acts like temperature in a physical system and is represented by the
noise and gain terms in Eq. (2), which follows. In the electronic model we use, each
neuron performs the activation computation

$$s_i = f\left(\beta \, (u_i + v_i)\right) \qquad (2)$$

where $f$ is a monotonic non-linear function such as tanh. The noise, $v$, is chosen from a
zero-mean Gaussian distribution whose width is proportional to the temperature. This
closely approximates the distribution in Eq. (1) and comes from our hardware
implementation, which supplies uncorrelated noise in the form of a binomial
distribution [7] (Alspector, 1991b) to each neuron. The noise is slowly reduced as
annealing proceeds. For mean-field learning, the noise is zero but the gain, $\beta$, has a finite
value proportional to $1/T$ taken from the annealing schedule. Thus the non-linearity
sharpens as 'annealing' proceeds.
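As a concrete illustration, the following NumPy sketch simulates the relaxation of Eq. (2) under both annealing styles. This is our own illustrative software model of the settling dynamics, not the chip's circuitry; the schedule values and the `settle` helper name are assumptions.

```python
import numpy as np

def settle(W, s, clamped, T_schedule, mean_field=False, rng=None):
    """Relax network state s under symmetric weights W (zero diagonal).

    Boltzmann mode (Eqs. 1-2): threshold the net input plus zero-mean noise
    whose width tracks the temperature T.  Mean-field mode: no noise, but
    gain beta = 1/T, so the tanh non-linearity sharpens as T falls.
    `clamped` is a boolean mask of neurons held fixed (inputs, teacher outputs).
    """
    if rng is None:
        rng = np.random.default_rng()
    for T in T_schedule:                     # e.g. np.geomspace(10.0, 0.5, 20)
        for i in np.flatnonzero(~clamped):   # update only the free neurons
            u = W[i] @ s                     # net input by "current summing"
            if mean_field:
                s[i] = np.tanh(u / T)        # s_i = f(beta * u_i), beta = 1/T
            else:
                v = rng.normal(0.0, T)       # noise width proportional to T
                s[i] = 1.0 if u + v > 0 else -1.0
    return s
```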
The network is annealed in two phases, + and -, corresponding to clamping the outputs
in the desired state (teacher phase) and allowing them to run free (student phase) at each
pattern presentation. The learning rule which adjusts the weight $w_{ij}$ from neuron $j$ to
neuron $i$ is

$$\Delta w_{ij} = \mathrm{sgn}\left[(s_i s_j)^+ - (s_i s_j)^-\right] \qquad (3)$$

Note that this measures the instantaneous correlations after annealing. For both phases
each synapse memorizes the correlations measured at the end of the annealing cycle and
weight adjustment is then made (i.e., online). The sgn matches our hardware
implementation which changes weights by one each time.
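A software analogue of one such online step might look as follows. This is a hedged sketch built on the `settle` helper above: it anneals in both phases for generality (the chip needs no teacher-phase anneal for three-layer networks), and it binarizes analog states at zero the way the chip thresholds at 2.5 volts.

```python
def learn_pattern(W, x, y_target, in_mask, out_mask, T_schedule, rng=None):
    """One online two-phase learning step implementing Eq. (3)."""
    if rng is None:
        rng = np.random.default_rng()
    s = np.where(rng.standard_normal(W.shape[0]) > 0, 1.0, -1.0)

    # Student (-) phase: clamp only the inputs; outputs run free.
    s[in_mask] = x
    s_minus = settle(W, s.copy(), clamped=in_mask, T_schedule=T_schedule)
    b_minus = np.where(s_minus > 0, 1, -1)      # binarize at the threshold

    # Teacher (+) phase: clamp inputs and desired outputs.
    s[out_mask] = y_target
    s_plus = settle(W, s.copy(), clamped=in_mask | out_mask,
                    T_schedule=T_schedule)
    b_plus = np.where(s_plus > 0, 1, -1)

    # Eq. (3): each weight moves by one unit in the sign of the correlation gap.
    W += np.sign(np.outer(b_plus, b_plus) - np.outer(b_minus, b_minus))
    np.fill_diagonal(W, 0)
    return W
```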
2.2 Learning Microchip
Fig. 1 shows the learning microchip which has been fabricated. It contains 32 neurons
and 992 connections (496 bidirectional synapses). On the extreme right is a noise
generator which supplies 32 uncorrelated pseudo-random noise sources [7]
(Alspector, 1991b) to the neurons to their left. These noise sources are summed in the
form of current along with the weighted post-synaptic signals from other neurons at the
input to each neuron in order to implement the simulated annealing process of the
stochastic Boltzmann machine. The neuron amplifiers implement a non-linear activation
function which has variable gain to provide for the gain sharpening function of the
mean-field technique. The range of neuron gain can also be adjusted to allow for scaling
in summing currents due to adjustable network size.

[Figure 1: die photograph of the chip; the synapse array occupies most of the area, with the noise generator along the right edge.]
Figure 1. Photo of 32-Neuron Cascadable Learning Chip
Most of the area is occupied by the synapse array. Each synapse digitally stores a weight
ranging from -15 to +15 as 4 bits plus a sign. It multiplies the voltage input from the
presynaptic neuron by this weight to output a current. One conductance direction can be
disconnected so that we can experiment with asymmetric networks [8] (Allen, 1990).
Although the synapses can have their weights set externally, they are designed to be
adaptive. They store correlations, in parallel, using the local learning rule of Eq. (3) and
adjust their weights accordingly. A neuron state range of -1 to 1 is assumed by the digital
learning processor in each synapse on the chip.
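A faithful software model should therefore saturate the weight increments at the 4-bit-plus-sign limit. A minimal sketch of such an update, refining the earlier `learn_pattern` (the `W_MAX` constant is our assumption matching the stated ±15 range):

```python
W_MAX = 15  # 4 magnitude bits plus sign, as stored in each on-chip synapse

def apply_update(W, delta):
    """Move each weight by at most one unit, saturating at +/-W_MAX."""
    W = np.clip(W + np.sign(delta), -W_MAX, W_MAX)
    np.fill_diagonal(W, 0)  # no self-connections in the synapse array
    return W
```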
Fig. 2a shows a family of transfer functions of a neuron, showing how the gain is
continually adjustable by varying a control voltage. Fig. 2b shows the transfer function
of a synapse as different weights are loaded. The input linear range is about 2 volts.
[Figure 2: measured transfer curves. 2a: neuron output voltage (V) versus input current (μA) for a family of gain settings; 2b: synapse output versus input voltage (V) for loaded weights from -15 to +15.]
Figure 2. Transfer Functions of Electronic Neuron (2a) and Synapse (2b)

Fig. 3 shows waveforms during exclusive-OR learning using the noise annealing of the
Boltzmann machine. The top three traces are hidden neurons while the bottom trace is
the output neuron which is clamped during the + phase. There are two input patterns
presented during the time interval displayed, (-1,+1) and (+1,-1), both of which should
output a +1 (note the state clamped to high voltage on the output neuron). Note the
sequence of steps involved in each pattern presentation. 1) Outputs from the previous
pattern are unclamped. 2) The new pattern is presented to the input neurons. 3) Noise is
presented to the network and annealed. 4) The student phase latch captures the
correlations. 5) Data from the neuron states is read into the data analyzer. 6) The output
neurons are clamped (no annealing is necessary for a three layer network). 7) The
teacher phase latch captures the correlations. 8) Weights are adjusted (go to step 1).
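This eight-step cycle maps naturally onto a host-side driver loop. The sketch below is purely illustrative: the method names are hypothetical stand-ins for the actual GPIB/VME commands, which the paper does not specify.

```python
def present_pattern(chip, x, y_target):
    """One training-pattern cycle, mirroring steps 1-8 above (hypothetical API)."""
    chip.unclamp_outputs()              # 1) release the previous outputs
    chip.present_inputs(x)              # 2) drive the input neurons
    chip.anneal()                       # 3) noise/gain annealing (~100 us here)
    chip.latch_student_correlations()   # 4) student (-) phase latch
    states = chip.read_states()         # 5) read states into the data analyzer
    chip.clamp_outputs(y_target)        # 6) teacher phase; no anneal for 3 layers
    chip.latch_teacher_correlations()   # 7) teacher (+) phase latch
    chip.adjust_weights()               # 8) on-chip +/-1 weight updates
    return states                       # then go to step 1 for the next pattern
```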
[Figure 3: oscilloscope traces (four channels, 5 V/div, 500 μs/div) of three hidden neurons and the clamped output neuron across two pattern presentations.]
Figure 3. Neuron Signals during Learning (see text for steps involved)
Fig. 4a shows an expanded view of 4 neuron waveforms during the noise annealing
portion of the chip operation during Boltzmann learning. Fig. 4b shows a similar portion
during gain annealing. Note that, at low gain, the neuron states start at 2.5 volts and
settle to an analog value between 0 and 5 volts. For the purposes of classification for the
digital problems we investigated, neurons are either +1 or -1 depending on whether their
voltage is above or below 2.5 volts. This isn't clear until after settling. There are several
instances in Figs. 3 and 4 where the neuron state changes after noise or gain annealing.

[Figure 4: expanded oscilloscope traces (5 V/div, 20 μs/div) of four neuron waveforms settling during noise annealing (4a) and gain annealing (4b).]
Figure 4. Neuron Signals during Annealing with Noise (4a) and Gain (4b)
The speed of pattern presentations is limited by the length of the annealing signal for
system settling (100 μs in Fig. 3). The rest of the operations can be made negligibly
short in comparison. The annealing time could be reduced to 10 μs or so, leading to a
rate of about 100,000 patterns/sec. In comparison, a 10-10-10 replication problem,
which fits on a single chip, takes about a second per pattern on a SPARCstation 2. This
time scales roughly with the number of weights on a sequential machine, but is almost
constant on the learning chip due to its parallel nature.
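These numbers are consistent with the 10^8 connections-per-second figure quoted in the conclusion; a quick back-of-the-envelope check (our arithmetic, not a measurement from the paper):

```python
anneal_time = 10e-6                  # s: the reduced annealing signal
patterns_per_s = 1 / anneal_time     # ~1e5 pattern presentations per second
connections = 992                    # 496 bidirectional synapses per chip
print(patterns_per_s * connections)  # ~9.9e7, i.e. roughly 1e8 connections/s
```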
We can do even larger problems in a multiple chip system because the chip is designed to
be cascaded with other similar chips in a board-level system which can be accessed by a
computer. The nodes which sum current from synapses for net input into a neuron are
available externally for connection to other chips and for external clamping of neurons or
other external input. We are currently building such a system with a VME bus interface
for tighter coupling to our software than is allowed by the GPIB instrument bus we are
using at the time of this writing.
2.3 Learning Experiments
To study learning as a function of problem size, we chose the parity and replication
(identity) problems. This facilitates comparisons with our previous simulations [6]
(Alspector, 1991a). The parity problem is the generalization of exclusive-OR for
arbitrary input size. It is difficult because the classification regions are disjoint with
every change of input bit, but it has only one output. The goal of the replication problem
is for the output to duplicate the bit pattern found on the input after being encoded by the
hidden layer. Note that the output bits can be shifted or scrambled in any order without
affecting the difficulty of the problem. There are as many output neurons as input. For
the replication problem, we chose the hidden layer to have the same number of neurons
as the input layer, while for parity we chose the hidden layer to have twice the number as
the input layer.
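For reference, both training sets are easy to enumerate in software. The sketch below uses the ±1 encoding assumed by the chip's neuron states (the helper names are ours, not from the paper):

```python
from itertools import product

def parity_patterns(n):
    """All 2^n inputs in {-1,+1}^n with the single parity target."""
    for bits in product([-1, 1], repeat=n):
        target = 1 if bits.count(1) % 2 == 1 else -1  # odd +1 count -> +1
        yield list(bits), [target]

def replication_patterns(n):
    """Identity task: the n outputs must reproduce the n inputs."""
    for bits in product([-1, 1], repeat=n):
        yield list(bits), list(bits)
```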
[Figure 5: X-window displays for five mean-field runs of 4-4-4 replication, showing the network topology, neuron states and weights, and graphs of percent correct, Hamming-distance error, and neuron-state volatility versus time (5a) or gain, BETA (5b).]
Figure 5. X-window Display for Learning on Chip (5a) and in Software (5b)
Fig. 5 shows the X-window display for 5 mean-field runs for learning the 4 input, 4
hidden, 4 output (4-4-4) replication on the chip (5a) and in the simulator (5b). The user
specification is the same for both. Only the learning calculation module is different.
Both have displays of the network topology, the neuron states (color and pie-shaped arc
of circles) and the network weights (color and size of squares). There are also graphs of
percent correct and error (Hamming distance for replication) and one of volatility of
neuron states [9] (Alspector, 1992) as a measure of the system temperature. The learning
curves look quite similar. In both cases, one of the 5 runs failed to learn to 100 %. The
boxes representing weights are signed currents (about 4 μA per unit weight) in 5a and
integers from -15 to +15 in 5b. Volatility is plotted as a function of time (μsec) in 5a and
shows that, in hardware (see Fig. 4), time is needed for a gain decrease at the start of the
annealing as well as for the gain increase of the annealing proper. The volatility in 5b is
plotted as a function of gain (BETA) which increases logarithmically in the simulator at
each anneal step.
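As a rough software stand-in for the volatility measure (our assumed definition; the exact measure is given in [9]), one can track the fraction of free neurons whose thresholded state flips between successive sweeps:

```python
def volatility(prev_states, states):
    """Fraction of neurons whose thresholded state flipped since the last sweep.

    High volatility ~ high effective temperature; it falls toward zero as
    annealing freezes the network.  (Assumed proxy for the measure in [9].)
    """
    prev_sign = np.where(prev_states > 0, 1, -1)
    sign = np.where(states > 0, 1, -1)
    return float(np.mean(prev_sign != sign))
```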
[Figure 6: percent correct and Hamming distance versus number of patterns presented, averaged over 10 runs, for Boltzmann (BZ) and mean-field (MFT) learning.]
Figure 6. On-chip Learning for 6 Input Replication (6a) and Parity (6b)
Fig. 6a displays data from the average of 10 runs of 6-6-6 replication for both Boltzmann
(BZ) and mean-field (MFT) learning. While the percent correct saturates at 90 % (70 %
for Boltzmann), the output error as measured by the Hamming distance between input
and output is less than 1 bit out of 6. Boltzmann learning is somewhat poorer in this
experiment probably because circuit parameters have not yet been optimized. We expect
that a combination of noise and gain annealing will yield the best results but have not
tested this possibility at this writing. Fig. 6b is a similar plot for 6-12-1 parity.
We have done on-chip learning experiments using noise and gain annealing for parity
and replication up to 8 input bits, nearly utilizing all the neurons on a single chip. To
judge scaling behavior in these early experiments, we note the number of patterns
required until no further improvement in percent correct is visible by eye. Fig. 7a plots,
for an average of 10 runs of the parity problem, the number of patterns required to learn
up to the saturation value for percent correct for both Boltzmann and mean-field learning.
This scales roughly as an exponential in number of inputs for learning on chip just as it
did in simulation [6] (Alspector, 1991a) since the training set size is exponential. The final
percent correct is indicated on the plot. Fig. 7b plots the equivalent data for the
replication problem. Outliers are due to low saturation values. Overall, the training time
per pattern on-chip is quite similar to our simulations. However, in real-time, it can be
about 100,000 times as fast for a single chip and will be even faster for multiple chip
systems. The speed for either learning or evaluation is roughly $10^8$ connections per
second per chip.
[Figure 7: number of patterns presented until learning saturates versus number of inputs, for Boltzmann and mean-field runs, with the average percentage correct at saturation annotated on each curve.]
Figure 7. Scaling of Parity (7a) and Replication (7b) Problem with Input Size
3. CONCLUSION
We have shown that Boltzmann and mean-field learning networks can be implemented in
a parallel, analog VLSI system. While we report early experiments on a single-chip
digital system, a multiple-chip VME-based electronic system with analog I/O is being
constructed for use on larger problems.
ACKNOWLEDGMENT:
This work has been partially supported by AFOSR contract F49620-90-C-0042, DEF.
REFERENCES
1. D.E. Rumelhart, G.E. Hinton, & R.J. Williams, "Learning Internal Representations by Error
Propagation", in Parallel Distributed Processing: Explorations in the Microstructure of
Cognition, Vol. 1: Foundations, D.E. Rumelhart & J.L. McClelland (eds.), MIT Press,
Cambridge, MA (1986), p. 318.
2. J.J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective
Computational Abilities", Proc. Natl. Acad. Sci. USA, 79, 2554-2558 (1982).
3. D.H. Ackley, G.E. Hinton, & T.J. Sejnowski, "A Learning Algorithm for Boltzmann
Machines", Cognitive Science 9 (1985) pp. 147-169.
4. C. Peterson & J.R. Anderson, "A Mean Field Theory Learning Algorithm for Neural
Networks", Complex Systems, 1:5, 995-1019, (1987).
5. J. Alspector, B. Gupta, & R.B. Allen, "Performance of a Stochastic Learning Microchip", in
Advances in Neural Information Processing Systems 1, D. Touretzky (ed.), Morgan-Kaufmann,
Palo Alto, (1989), pp. 748-760.
6. J. Alspector, R.B. Allen, A. Jayakumar, T. Zeppenfeld, & R. Meir, "Relaxation Networks for
Large Supervised Learning Problems", in Advances in Neural Information Processing Systems
3, R.P. Lippmann, J.E. Moody, & D.S. Touretzky (eds.), Morgan-Kaufmann, Palo Alto, (1991),
pp. 1015-1021.
7. J. Alspector, J.W. Gannett, S. Haber, M.B. Parker, & R. Chu, "A VLSI-Efficient Technique for
Generating Multiple Uncorrelated Noise Sources and Its Application to Stochastic Neural
Networks", IEEE Trans. Circuits & Systems, 38, 109, (Jan., 1991).
8. R.B. Allen & J. Alspector, "Learning of Stable States in Stochastic Asymmetric Networks",
IEEE Trans. Neural Networks, 1, 233-238, (1990).
9. J. Alspector, T. Zeppenfeld, & S. Luna, "A Volatility Measure for Annealing in Feedback
Neural Networks", to appear in Neural Computation, (1992).